Windows NT vs. Unix: A design comparison

643 points, posted 9 days ago
by LorenDB

625 Comments

phendrenad2

9 days ago

One thing I don't see mentioned in this article, and I consider to be the #1 difference between NT and Unix, is the approach to drivers.

It seems like NT was designed to fix a lot of the problems with drivers in Windows 3.x/95/98. Drivers in those OSs came from 3rd party vendors and couldn't be trusted to not crash the system. So ample facilities were created to help the user out, such as "Safe Mode", fallback drivers, and a graphics driver interface that disables itself if it crashes too many times (yes really).

Compare that to any Unix: historic AT&T Unix, Solaris, Linux, BSD 4.x, Net/Free/OpenBSD, any research Unix being taught at universities, or any of the new crop of Unix-likes such as Redox. Universally, the philosophy there is that drivers are high-reliability components vetted and likely written by the kernel devs.

(Windows also has a stable driver API and I have yet to see a Unix with that, but that's another tangent)

sedatk

8 days ago

> and a graphics driver interface that disables itself if it crashes too many times

That feature is one of the great ones that came with Windows Vista.

ruthmarx

8 days ago

It really was nice to be able to at least still use the system if the display driver is crashing. 800x600 at 16 bit or whatever it was is still better than nothing.

kevincox

6 days ago

And most importantly it is enough to debug and fix the system.

hernandipietro

8 days ago

Windows NT 3.x had graphics and printer drivers in user mode for stability reasons. Windows NT 4 moved them to Ring 0 to speed up graphics applications.

IntelMiner

8 days ago

Then almost immediately took them back out after realizing this was a bad idea

chrisfinazzo

8 days ago

I presume this reversal happened during NT's main support window?

deaddodo

8 days ago

Well, by "immediate" they mean: "was user space through Win2k, went to kernel space in XP to match 9x performance, reversed in Vista".

So...one generation, and about 7 years later.

Narishma

4 days ago

They were moved into kernel space in NT 4.

jonathanyc

8 days ago

> and a graphics driver interface that disables itself if it crashes too many times (yes really)

I actually ran into this while upgrading an AMD driver and was very impressed! On Linux and macOS I was used to just getting kernel panics.

It’s too bad whatever system Crowdstrike hooked into was not similarly isolated.

baq

8 days ago

APIs used by crowdstrike et al are also what made WSL1 unworkable performance-wise. Can’t have security without slowness nowadays it seems.

szundi

8 days ago

The cost is all kinds of inconvenience.

Dalewyn

8 days ago

>So ample facilities were created to help the user out, such as "Safe Mode", fallback drivers, and a graphics driver interface that disables itself if it crashes too many times (yes really).

Pretty sure most of this was already in place with Windows 95; I know Safe Mode definitely was along with a very basic VGA driver that could drive any video card in the world.

deaddodo

8 days ago

Safe Mode existed in Win9x.

User space drivers that could restart without kernel panicking didn't exist until Windows Vista (well, on the home user side of Windows).

Fallback drivers were never a thing on Win9x, you would have to go into safe mode to uninstall broken drivers that wouldn't allow a boot (typically graphics drivers); or manually uninstall/replace them otherwise.

Dalewyn

8 days ago

Fallback drivers existed, because how else would Safe Mode drive the video card so you can see something to operate the borked computer?

Also used when Windows doesn't have a specific driver yet, like immediately after a clean install.

deaddodo

7 days ago

> Fallback drivers existed, because how else would Safe Mode drive the video card so you can see something to operate the borked computer?

If you use "fallback" in a colloquial sense, yes, the VESA/VGA driver that Windows fell back on was a fallback driver.

Fallback Drivers (as a proper noun) are a distinct concept: the last functional or alternative drivers that Windows would use if the primary one failed. They sat in between Windows' generic drivers and your currently installed one.

The latter is what is being referred to in this conversation.

Dalewyn

7 days ago

I'm not aware of "primary" drivers if that's a thing. I know that Safe Mode always falls back to the drivers Windows has by default because that's the whole point of Safe Mode.

deaddodo

7 days ago

Then you shouldn't be discussing, with authority, a topic you have no understanding/knowledge of.

Pretty simple.

Dalewyn

7 days ago

Been using Windows since 3.1, my dude. I know Windows like the back of my hands.

You can have multiple drivers for a device installed, but Windows will usually get very confused and/or the drivers will get very confused with themselves and things will get very janky very quickly.

It's really not worth the pain and hilarity compared to just KISS'ing with one driver and nuking and installing as desired. DDU was invented for a reason.

Anyway, Windows in Safe Mode will always fall back to its own default store of drivers. You can explicitly enable an exception for NICs, but otherwise no third-party drivers are loaded because the entire point of Safe Mode is to get Windows to boot using known-good, Microsoft-guaranteed drivers.

deaddodo

6 days ago

> Been using Windows since 3.1, my dude. I know Windows like the back of my hands.

As have I. Congrats, your anecdote is now useless as a point of authority.

brontitall

7 days ago

> (Windows also has a stable driver API and I have yet to see a Unix with that, but that's another tangent)

Solaris has (or at least had) DDI (Device Driver Interface) and DLPI (Data Link Provider Interface)

hulitu

5 days ago

> It seems like NT was designed to fix a lot of the problems with drivers in Windows 3.x/95/98

NT was running in parallel with Windows 3.x/9x. NT had better drivers because it was multiuser with some memory protection.

Just look at the recent Crowdstrike incident to see what they really fixed. /s

nullindividual

9 days ago

There's a large debate whether a 'hybrid' kernel is an actual thing, and/or whether NT is just a monolithic kernel.

The processes section should be expanded upon. The NT kernel doesn't execute processes, it executes _threads_. Threads can be created in a few milliseconds whereas, as noted, processes are heavyweight; essentially the opposite of Unices. This is a big distinction.

io_uring would be the first true non-blocking async I/O implementation on Unices.

It should also be noted that while NT as a product is much newer than Unices, its history is rooted in VMS fundamentals thanks to its core team of ex-Digital devs led by David Cutler. This pulls back that 'feature history' by a decade or more. Still not as old as UNIX, but "old enough", one could argue.

[0] https://stackoverflow.com/questions/8768083/difference-betwe...

emily-c

9 days ago

Before VMS there was the family of RSX-11 operating systems which also had ASTs (now called APCs in NT parlance), IRPs, etc. Dave Cutler led the RSX-11M variant which significantly influenced VMS. The various concepts and design styles of the DEC family of operating systems that culminated in NT go back to the 1960s.

It's sad that the article didn't mention VMS or MICA since NT didn't magically appear out of the void two years after Microsoft hired the NT team. MICA was being designed for years at DEC West as part of the PRISM project.

rbanffy

9 days ago

In many ways NT was a new, ground up implementation of “VMS NT”.

It started out elegant, but backwards compatibility, technical debt, bad ideas, dozens of versions, and an endless list of features driven by whoever had the bigger wand at Microsoft at the time have taken their toll. Windows now is much more complicated than it could be.

It shocks me some apps get Windows NT4 style buttons even on Windows 11.

emily-c

9 days ago

>In many ways NT was a new, ground up implementation of “VMS NT”.

Most definitely. There was a lot of design cleanup from VMS (e.g. fork processes -> DPCs, removing global PTEs and balance slots, etc), optimizations (converging VMS's parallel array structure of the PFN database into one), and simplification (NT's Io subsystem with the "bring your own thread" model, removing P1 space, and much more). SMP was also designed into NT from the beginning. You can see the start of these ideas in the MICA design documents, but their implementation in C instead of Pillar (a variant of Pascal designed for Mica) in NT was definitely the right thing at the time.

heraldgeezer

9 days ago

>It shocks me some apps get Windows NT4 style buttons even on Windows 11.

This is good, though. The alternative is that the app won't run at all, right? Windows NT is good because of that background compatibility, both for business apps and games.

rbanffy

8 days ago

> The alternative is that the app won't run at all, right?

The alternative is that the application displays with whatever the current GUI uses for its widgets.

radicalbyte

8 days ago

Under Windows it's very rare to have trouble running software. When you have trouble it's usually due to some security considerations or because you're using something which has roots in other operating systems.

macOS & Linux are nothing like this. You can run most software, as most of the basis for modern software on those stacks is available in source form and can be maintained. Software which isn't available in source form breaks.

Apple/Google with their mobile OSes take that a step further, most older software is broken on those platforms.

The way they've kept compatibility within Windows is something I really love about the platform.. but I keep wondering if there's a way to get the best of both worlds. Can you keep the compatibility layer as an ad hoc thing, running under emulation, so that the core OS can be rationalised?

gunapologist99

8 days ago

In fairness, closed source software is a very very tiny minority of the software available on Linux, which is why ABI backwards-compatibility hasn't been much of a concern. In that respect, it's essentially the polar opposite of Windows and even MacOS.

However, it'd be very nice if it did become more of a focus (especially in the glibc/audio/glx areas), especially now that gaming has become very popular on Linux.

Trying to get old, closed-source games like Unreal Tournament to work on Linux can be a real chore.

radicalbyte

7 days ago

I'm not so sure, I like the Linux model of 99.999% of the code you'll run being available in source form. The result is that we have that code running everywhere.

I strongly dislike the Apple model.

TheAmazingRace

8 days ago

Fun fact. If you increment each letter of VMS by one, you get WNT. If that isn't on purpose, it's a convenient coincidence.

markus_zhang

9 days ago

How do you get Windows NT4 style buttons on 11? That's something I want to do with my application!

dspillett

9 days ago

The GDI libraries/APIs that provide that are all still there, you just need to find a framework that lets you see them, or kick through the abstraction walls of [insert chosen app framework] to access them more manually. Be prepared for a bit of extra work on what more modern UI libraries make more automatic, and for having to specify everything rather than just what you want to differ from the default.

markus_zhang

9 days ago

Oh thanks, I always thought what is there now is the native look. I didn't realize the old graphics path is still there. Maybe the Win3.x style is still there too?

saratogacx

9 days ago

I think you can get back to Win9x/2k style controls by instructing the system to not add any theming. If you're finding a panel that is using 3.x controls, they're likely in the resources of the app/dll. Although the 3.x file picker can still be found in a couple of rare corners of the OS.

https://learn.microsoft.com/en-us/windows/win32/api/uxtheme/...

    STAP_ALLOW_NONCLIENT
Specifies that the nonclient areas of application windows will have visual styles applied.
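
For anyone who wants to try it, here's a minimal sketch of opting a process out of visual styles so new windows get the classic controls (assumes the SetThemeAppProperties API described in the linked uxtheme docs; error handling omitted):

    /* Sketch: opt the current process out of visual styles so that windows
       created afterwards use classic (pre-theming) controls. */
    #include <windows.h>
    #include <uxtheme.h>
    #pragma comment(lib, "uxtheme.lib")

    void disable_visual_styles(void)
    {
        /* Passing 0 clears STAP_ALLOW_NONCLIENT, STAP_ALLOW_CONTROLS and
           STAP_ALLOW_WEBCONTENT for this process. */
        SetThemeAppProperties(0);
    }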

abareplace

8 days ago

If there is no application manifest, you will get Windows NT4 / Windows 9x style buttons. Just tested this on Windows 11.

Taniwha

8 days ago

Vaxes also had hardware support for ASTs in VMS (unlike NT) - they were essentially software interrupts that only triggered when the CPU was in a process context and no enabled interrupts were pending - so you could set a bit in a mask in another thread's context that would get loaded automatically on context switch and triggered once the thread was runnable in user mode. Device drivers could trigger a similar mechanism in kernel mode (and the 2 intermediate hardware modes/rings). There were also atomic queue instructions that would dispatch waiting ASTs.

ssrc

8 days ago

Months ago I found this presentation on YouTube, "Re-architecting SWIS for X86-64"[0], about how VMS was ported from VAX to Alpha to Itanium to x86, which does not have the same AST behaviour.

[0] https://www.youtube.com/watch?v=U8kcfvJ1Iec

jdougan

8 days ago

Especially since there was, apparently, MICA code copy pasted verbatim in NT.

https://www.techmonitor.ai/technology/dec_forced_microsoft_i...

I was wondering for years why MS continued to support DEC Alpha CPUs with NT.

rasz

8 days ago

Didn't it end brilliantly for MS? The settlement involved MS supporting Alpha while DEC trained its enormous sales/engineering arm to sell and support NT, thus killing any incentive to buy DEC hardware in the first place. DEC moved up the value chain and Microsoft moved tons of NT to all existing DEC corporate customers.

jdougan

7 days ago

Oh yes. MS took full advantage of the situation. MS was always better at strategic planning than DEC.

PaulDavisThe1st

9 days ago

> The processes section should be expanded upon. The NT kernel doesn't execute processes, it executes _threads_. Threads can be created in a few milliseconds where as noted, processes are heavy weight; essentially the opposite of Unicies. This is a big distinction.

I am not sure what point you are attempting to make here. As written, it is more or less completely wrong.

NT and Unix kernels both execute threads. Threads can be created in a few microseconds. Processes are heavy weight on both NT and Unix kernels.

The only thing I can think of is the long-standing idea that Unix tends to encourage creating new processes and Windows-related OS kernels tend to encourage creating new threads. This is not false - process creation on Windows-related OS kernels is an extremely heavyweight operation, certainly compared with any Unix. But it doesn't make the quote from you above correct.

On a separate note, the state of things at the point of creation of NT is really of very little interest other than to computer historians. It has been more than 30 years, and every still-available Unix, and presumably NT, have continued to evolve since then. Linux has dozens to hundreds of design features in it that did not exist in any Unix when NT was released (and did not exist in NT either).

epcoa

9 days ago

Processes and threads on NT are distinct nominative types of objects (in a system where “object” has a much more precise meaning), and the GP is at least correct that the former are not schedulable entities. This distinction doesn’t really exist on Linux, for instance, where to a first approximation there are, on the user side, only processes (at least to use the verbiage of the clone syscall - look elsewhere and they’re threads, in part due to having to support pthreads), and the scheduler schedules “tasks” (task_struct) (whereas in NT the “thread” nomenclature carries throughout). FreeBSD may have separate thread and proc internally, but this is more an implementation detail. I guess this is all to say that, at a level lower than an API like pthreads, process/thread really isn’t easily comparable between NT and most Unixes.

It’s not so much “heavyweight” vs “lightweight” but that NT has been by design more limited in how you can create new virtual memory spaces.

For better or worse NT tied the creation of VM spaces to this relatively expensive object to create which has made emulating Unix like behavior historically a pain in the ass.

PaulDavisThe1st

9 days ago

pthreads is a user-space API, and has nothing to do with the kernel. It is possible to implement pthreads entirely in user space (though somewhat horrific to do so). Linux does not have kernel threads in order to support pthreads (though they help).

Anyway, I see your point about the bleed between the different semantics of a task_t in the linux kernel.

epcoa

9 days ago

> Linux does not have kernel threads in order to support pthreads

Yes, what I was alluding to somewhat cryptically was things like tgids and the related tkill/tgkill syscalls that as far I am aware were added with the implementation of 1:1 pthread support in mind.

mananaysiempre

9 days ago

> io_uring would be the first true non-blocking async I/O implementation on Unices.

I would agree with that statement in isolation, except by that standard (no need for a syscall per operation) the first “true” asynchronous I/O API on NT would be I/O ring (2021), a very close copy of io_uring. (Registered I/O, introduced in Windows 8, does not qualify because it only works on sockets.) The original NT API is absolutely in the same class as the FreeBSD and Solaris ones, it’s just that Linux didn’t have a satisfactory one for a long long time.

nullindividual

9 days ago

POSIX AIO is not non-blocking async I/O; it can block other threads requesting the resource. IOCP is true non-blocking async I/O. IOCP also extends to all forms of I/O (file, TCP socket, network, mail slot, pipes, etc.) instead of a particular type.

POSIX AIO has usability problems also outlined in the previously linked thread.

Remember, all I/O in NT is async at the kernel level. It's not a "bolt-on".

IoRing is limited to file reads, unlike io_uring.
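
For anyone unfamiliar with the model, here's a rough sketch of the IOCP pattern: one completion port, a small pool of worker threads. It assumes the handle was opened with FILE_FLAG_OVERLAPPED and omits error handling:

    #include <windows.h>
    #include <stdio.h>

    static DWORD WINAPI worker(LPVOID arg)
    {
        HANDLE port = (HANDLE)arg;
        for (;;) {
            DWORD bytes;
            ULONG_PTR key;
            OVERLAPPED *ov;
            /* Blocks until some I/O completes; the kernel wakes at most as
               many threads as the concurrency value given at port creation. */
            if (!GetQueuedCompletionStatus(port, &bytes, &key, &ov, INFINITE))
                break;  /* simplified: real code distinguishes failed I/O */
            printf("completion: key=%llu bytes=%lu\n",
                   (unsigned long long)key, bytes);
        }
        return 0;
    }

    void start_iocp(HANDLE h)
    {
        /* Last argument is the concurrency hint: 0 means "one per CPU". */
        HANDLE port = CreateIoCompletionPort(INVALID_HANDLE_VALUE, NULL, 0, 0);
        CreateIoCompletionPort(h, port, /*completion key*/ 1, 0);
        for (int i = 0; i < 4; i++)
            CreateThread(NULL, 0, worker, port, 0, NULL);
        /* Overlapped ReadFile/WriteFile calls on h now complete on the port. */
    }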

dataflow

8 days ago

> IOCP is a true non-blocking async I/O.

Unfortunately that's only half-true. You can (and will) still get blocking sometimes with IOCP; it depends on a lot of factors, like how loaded the system is, I think. There is absolutely no guarantee that your I/O will actually occur asynchronously, only that you will be notified of its completion asynchronously.

Also, opening a file is always synchronous, which is quite annoying if you're trying not to block, e.g., a UI thread.

The implication of both of these is you still need dedicated I/O threads. I love IOCP as much as anyone, but it does have these flaws, and was very much designed to be used from multiple threads.

The only workaround I'm aware of was User-Mode Scheduling, which effectively notified you as soon as your thread got blocked, but it still required multiple logical threads, and Microsoft removed support for it in Windows 11.

Sesse__

9 days ago

> Remember, all I/O in NT is async at the kernel level. It's not a "bolt-on".

All I/O in Linux is also async at the kernel level! The problem has always been expressing that asynchronicity to userspace in a sane way.

netbsdusers

9 days ago

Filesystem I/O (and probably more) is not async at the kernel level in Linux. (Just imagine trying to express the complexity of it in continuations or some sort of state machine!) As such, io_uring takes the form of a kernel thread pool. Disk block I/O by contrast is much easier to make fundamentally async, since it's almost always a case of submitting a request to an HBA and waiting for an interrupt.

mananaysiempre

8 days ago

> Just imagine trying to express the complexity of it in continuations or some sort of state machine!

You’d probably want to use either some sort of code generation to do the requisite CPS transform[1] or the Duff’s-device-like preprocessor trick[2], but it’s definitely doable with some compiler support. Not in an existing codebase, though.

(Brought to you by working on a C codebase that does express stuff like this as explicit callbacks and context structures. Ugh.)

[1] https://dx.doi.org/10.1007/s10990-012-9084-5, https://www.irif.fr/~jch/research/cpc-2012.pdf, https://github.com/kerneis/cpc

[2] https://www.chiark.greenend.org.uk/~sgtatham/coroutines.html
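
For the curious, here's a tiny sketch of the Duff's-device coroutine trick from [2]: the switch jumps back to the last yield point, so a callback-style state machine can be written as straight-line-looking code.

    #include <stdio.h>

    typedef struct { int state; int i; } coro;

    #define CORO_BEGIN(c) switch ((c)->state) { case 0:
    #define CORO_YIELD(c, v) do { (c)->state = __LINE__; return (v); case __LINE__:; } while (0)
    #define CORO_END(c) } (c)->state = -1; return -1

    /* Yields 0, 1, 2, ... one value per call, resuming where it left off. */
    int counter(coro *c)
    {
        CORO_BEGIN(c);
        for (c->i = 0;; c->i++)
            CORO_YIELD(c, c->i);
        CORO_END(c);
    }

    int main(void)
    {
        coro c = {0};
        for (int n = 0; n < 3; n++)
            printf("%d\n", counter(&c));   /* prints 0, 1, 2 */
        return 0;
    }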

wmf

8 days ago

Just imagine trying to express the complexity of [a filesystem] in continuations or some sort of state machine!

Arguably async/await could help with this; obviously it didn't exist in 1991 when Linux was created, but it would be interesting to revisit this topic.

SoothingSorbet

8 days ago

> Arguably async/await could help with this; obviously it didn't exist in 1991 when Linux was created

Wouldn't that just consist of I/O operations returning futures and then having an await() block the calling thread until the future is done (i.e. put it on a waitqueue)?

treyd

8 days ago

> Just imagine trying to express the complexity of it in continuations or some sort of state machine!

With Rust in the kernel this becomes somewhat possible to conceptualize.

jhallenworld

9 days ago

This is all fine, but Windows NT file access is still slow compared with Linux - this shows up in shell scripts. The reason is supposedly that it insists on syncing during close, or maybe waiting for all closed files to sync before allowing the process to terminate. Shouldn't close finality be an optional async event or something?

nullindividual

9 days ago

The reason is file system filters, of which Windows Defender is always present. There is a significant delay from Defender when performing CloseFile()[0].

> As I was looking at the raw system calls related to I/O, something immediately popped out: CloseFile() operations were frequently taking 1-5 milliseconds whereas other operations like opening, reading, and writing files only took 1-5 microseconds. That's a 1000x difference!

This is why DevDrive was introduced[1]. You can either have Defender operate in async mode (default) or remove it entirely from the volume at your own risk.

The performance issue isn't related to sync or async I/O.

[0] https://gregoryszorc.com/blog/2015/10/22/append-i/o-performa...

[1] https://devblogs.microsoft.com/visualstudio/devdrive/

cyberax

9 days ago

Windows FS stack is still _way_ slower than Linux. Filesystem operations have to create IRPs and submit them for execution through a generic mechanism. These IRPs can get filtered and modified in-flight, providing quite a bit of overall flexibility.

In Linux, filesystem paths are super-optimized, with all the filtering (e.g. for SELinux) special-cased if needed.

But even still, Windows also had to cheat to avoid completely cratering the performance, there's a shortcut called "FastIO": https://learn.microsoft.com/en-us/windows-hardware/drivers/i...

I wrote a filesystem for Windows around 25 years ago, and I still remember how I implemented all the required prototypes and everything in Explorer worked. But notepad.exe was just showing me empty data. It took me several days to find a note tucked into MSDN that you need to implement FastIO for memory mapped files to work (which Notepad.exe used).

nullindividual

8 days ago

But it simply isn't slower than Linux.

Robert Collins explains that performance is just as good as Linux and the performance loss on Windows is due to file system filters (Defender)[0].

This is what DevDrive intends to (and does) fix.

[0] https://youtu.be/qbKGw8MQ0i8?t=1759

cyberax

8 days ago

The last time I did FS tests was admittedly around 4 years ago, but back then Windows was several times slower on pure FS performance benchmarks (creating/listing/deleting large number of directories and files).

It used to be _much_ slower, like orders of magnitude slower, especially for directories with a large number of files.

nullindividual

8 days ago

You're not seeing "pure" FS performance. You're seeing all of the abstractions between you and the file system.

To get more of the abstractions out of the way, you want DevDrive. And don't use Explorer.exe as a test bed which has shims and hooks and god knows what else.

cyberax

8 days ago

I was testing the performance using plain C++ code.

SoothingSorbet

8 days ago

That's interesting, why would notepad.exe use mmapped files?

cyberax

8 days ago

Probably to save space when opening large files?

torginus

8 days ago

This probably has a lot to do with the mandatory file locking on Windows - afaik on Windows, the file is the representation of the underlying data of the disk, unlike on Linux, where it's just a reference to the inode, so locking the file on open is necessary. This is why you always get those 'file in use and cannot be deleted' prompts.

This impacts performance particularly when working with a ton of tiny files (like git does).

mananaysiempre

9 days ago

POSIX AIO + FreeBSD kqueue or Solaris ports are functionally equivalent to IOCP as far as I can tell.

nullindividual

9 days ago

trentnelson

9 days ago

I should do an updated version of that deck with io_uring and sans the PyParallel element. I still think it’s a good resource for depicting the differences in I/O between NT & UNIX.

And yeah, IOCP has implicit awareness of concurrency, and can schedule optimal threads to service a port automatically. There hasn’t been a way to do that on UNIX until io_uring.

nullindividual

9 days ago

Yes, please! And if you're interested, RegisteredIO and I assume you'd drop in IoRing.

In a nicely wrapped PDF :-)

trentnelson

9 days ago

Yeah I’d definitely include RegisteredIO and IoRing. When I was interviewing at Microsoft a few years back, I was actually interviewed by the chap that wrote RegisteredIO! Thought that was neat.

netbsdusers

9 days ago

POSIX AIO is just an interface. Windows also relies on thread pools for some async I/O (e.g. when reading files when all the data necessary to generate a disk request isn't in cache - good luck writing that as purely asynchronous).

trentnelson

9 days ago

None of the UNIXes have the notion of WriteFile with an OVERLAPPED structure; that’s the key to NT’s asynchronous I/O.

Nor do they have anything like IOCP, where the kernel is aware of the number of threads servicing a completion port, and can make sure you only have as many threads running as there are underlying cores, avoiding context switches. If you write your programs to leverage these facilities (which are unique to NT), you can get maximum performance out of your hardware very nicely.
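
Roughly, the OVERLAPPED pattern looks like this (a minimal sketch: it assumes the handle was opened with FILE_FLAG_OVERLAPPED and glosses over error handling and buffer lifetime):

    #include <windows.h>

    BOOL write_async(HANDLE h, const void *buf, DWORD len, ULONGLONG offset)
    {
        /* h must have been opened with FILE_FLAG_OVERLAPPED. */
        OVERLAPPED ov = {0};
        ov.Offset     = (DWORD)(offset & 0xFFFFFFFF);
        ov.OffsetHigh = (DWORD)(offset >> 32);
        ov.hEvent     = CreateEvent(NULL, TRUE, FALSE, NULL);

        /* Returns immediately; ERROR_IO_PENDING means the write is in flight. */
        if (!WriteFile(h, buf, len, NULL, &ov) &&
            GetLastError() != ERROR_IO_PENDING) {
            CloseHandle(ov.hEvent);
            return FALSE;                      /* submission itself failed */
        }

        /* ... do other work while the I/O proceeds ... */

        DWORD written;
        GetOverlappedResult(h, &ov, &written, TRUE);   /* TRUE = wait here */
        CloseHandle(ov.hEvent);
        return TRUE;
    }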

a-dub

8 days ago

Notably, the NT equivalent of select(), WaitForSingleObject and WaitForMultipleObjects, had the benefit that one select/wait-type syscall could be tickled by any of a network event, a file event, or the NT equivalent of a pthread signal.
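
A minimal sketch of that "wait on anything" idea - one call waiting on an I/O event, a socket event and a thread handle at once (the handles are assumed to have been created elsewhere, e.g. via CreateEvent, WSAEventSelect and CreateThread):

    #include <windows.h>

    /* Returns the index of whichever handle signalled first.
       (Ignores WAIT_TIMEOUT/WAIT_FAILED for brevity.) */
    DWORD wait_any(HANDLE io_event, HANDLE sock_event, HANDLE thread)
    {
        HANDLE handles[3] = { io_event, sock_event, thread };
        DWORD r = WaitForMultipleObjects(3, handles, FALSE /* any */, INFINITE);
        return r - WAIT_OBJECT_0;
    }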

formerly_proven

9 days ago

> There's a large debate whether a 'hybrid' kernel is an actual thing, and/or whether NT is just a monolithic kernel.

I don't think it's a concept that meaningfully exists. Microkernels are primarily concerned with isolating non-executive functions (e.g. device drivers) for stability and/or security (POLA) reasons. NT achieves virtually none of that (see e.g. Crowdstrike). The fact that Windows ships a thin user-mode syscall shim which largely consists of thin-to-nonexistent wrappers of NtXXX functions is architecturally uninteresting at best. Arguably binfmt_misc would then also make Linux a hybrid kernel.

hernandipietro

8 days ago

Originally, Windows NT 3.x was more "microkernel-like", as graphics and printer drivers were isolated. NT 4 moved them to kernel mode to speed up the system.

delta_p_delta_x

9 days ago

> The NT kernel doesn't execute processes, it executes _threads_

This is amongst the most important and visible differences between NT and Unix-likes, really. The key idea is that processes manage threads. Pavel Yosifovich in Windows 10 System Internals Part I puts it succinctly:

  A process is a containment and management object that represents a running instance of a program. The term “process runs” which is used fairly often, is inaccurate. Processes don’t run - processes manage. Threads are the ones that execute code and technically run.

NtCreateProcess is extremely expensive and its direct use is strongly discouraged (but Cygwin and MSYS2, in their IMO misguided intention to force Unix paradigms onto Windows, wrote fork() anyway), but thread creation and management is extremely straightforward, and the Windows threading API is as a result much nicer than pthreads.
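
To make the asymmetry concrete, here's a small sketch using the documented Win32 entry points (NtCreateProcess itself sits several layers below CreateProcess; error handling omitted):

    #include <windows.h>
    #include <stdio.h>

    static DWORD WINAPI work(LPVOID arg)
    {
        printf("hello from thread %lu\n", GetCurrentThreadId());
        return 0;
    }

    int main(void)
    {
        /* Cheap: a new thread inside the existing process object. */
        HANDLE t = CreateThread(NULL, 0, work, NULL, 0, NULL);
        WaitForSingleObject(t, INFINITE);
        CloseHandle(t);

        /* Expensive: a whole new process object, address space and initial
           thread. */
        STARTUPINFOW si = { sizeof(si) };
        PROCESS_INFORMATION pi;
        wchar_t cmd[] = L"cmd.exe /c echo hello from a new process";
        if (CreateProcessW(NULL, cmd, NULL, NULL, FALSE, 0, NULL, NULL, &si, &pi)) {
            WaitForSingleObject(pi.hProcess, INFINITE);
            CloseHandle(pi.hThread);
            CloseHandle(pi.hProcess);
        }
        return 0;
    }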

PaulDavisThe1st

9 days ago

It is hard to accept that this is written by someone with any idea about how Linux works (as a Unix).

A process (really, a "task") is a containment and management object that represents a running instance of a program. A process ("task") does not run; its threads do.

The significant difference between Windows-related OS kernels and Unix-y ones is that process creation is much more heavyweight on the former. Nevertheless, on both types of systems, it is threads that execute code and technically run.

immibis

9 days ago

This was written about Windows kernels.

Linux is the only Unix-like kernel I actually know anything about. In Linux, processes essentially do not exist. You have threads, and thread groups. A thread group is what most of the user-space tooling calls a process. It doesn't do very much by itself. As the name implies, it mostly just groups threads together under one identifier.

Linux threads and "processes" are both created using the "clone" system call, which allows the caller to specify how much state the new thread shares with the old thread. Share almost everything, and you have a "thread". Share almost nothing, and you have a "process". But the kernel treats them the same.
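
A minimal sketch of that spectrum using the glibc clone() wrapper (the flag sets are illustrative - pthread_create adds CLONE_THREAD, CLONE_SETTLS and more on top of the "thread-like" set):

    #define _GNU_SOURCE
    #include <sched.h>
    #include <signal.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/syscall.h>
    #include <sys/wait.h>
    #include <unistd.h>

    static int child(void *arg)
    {
        printf("%s: pid=%d tid=%ld\n",
               (const char *)arg, getpid(), (long)syscall(SYS_gettid));
        return 0;
    }

    int main(void)
    {
        const size_t sz = 1024 * 1024;

        /* "Process-like": share nothing, just ask for SIGCHLD on exit (like fork). */
        char *stack1 = malloc(sz);
        pid_t p = clone(child, stack1 + sz, SIGCHLD, (void *)"process-like");
        waitpid(p, NULL, 0);

        /* "Thread-like": share address space, fs info, files and signal handlers. */
        char *stack2 = malloc(sz);
        pid_t t = clone(child, stack2 + sz,
                        CLONE_VM | CLONE_FS | CLONE_FILES | CLONE_SIGHAND | SIGCHLD,
                        (void *)"thread-like");
        waitpid(t, NULL, 0);
        return 0;
    }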

By contrast, processes in NT are real data structures that hold all kinds of attributes, none of which is a running piece of code, since that's still handled by a thread in both designs.

ithkuil

9 days ago

IIRC indeed Linux preserves the time-honoured Unix semantics of a process ID by leveraging the thread group ID.

delta_p_delta_x

9 days ago

If you're splitting hairs, you're correct; processes manage threads on all OSs.

However, from the application programmer's perspective, the convention on Unix-likes (which is what really matters) is to fork and pipe between processes as IPC, whereas on Windows this is not the case. Clearly the process start-up time on Unix-likes is considered fast enough that parallelism on Unix until fairly recently was based on spinning up tens to hundreds of processes and IPC-ing between them.

I believe the point stands.
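
For concreteness, the classic fork-and-pipe idiom being referred to looks roughly like this (error handling omitted):

    #include <stdio.h>
    #include <string.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        int fds[2];
        pipe(fds);                    /* fds[0] = read end, fds[1] = write end */

        pid_t pid = fork();
        if (pid == 0) {               /* child: produce some work */
            close(fds[0]);
            const char *msg = "result from child\n";
            write(fds[1], msg, strlen(msg));
            _exit(0);
        }

        close(fds[1]);                /* parent: consume it */
        char buf[128];
        ssize_t n = read(fds[0], buf, sizeof buf - 1);
        buf[n > 0 ? n : 0] = '\0';
        printf("parent got: %s", buf);
        waitpid(pid, NULL, 0);
        return 0;
    }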

PaulDavisThe1st

9 days ago

For a certain kind of application programming, that is and was true, yes.

But not for many other kinds of application programming, where you create threads using pthreads or some similar API, which are mapped 1:1 onto kernel threads that collectively form a "process".

I'm not sure what your definition of "fairly recently" is, but in the mid-90s, when we wanted to test new SMP systems, we would typically write code that used pthreads for parallelism. The fact that there is indeed a story about process-level parallelism (with IPC) in Unix-y systems should not distract from the equally factual existence and use of thread-level parallelism for at least 35 years.

torginus

8 days ago

My knowledge might be very out of date, but I remember a Linux process being a unit of execution as well as isolation. Creating a process without a thread is not possible afaik.

In contrast, Linux threads were implemented essentially as a hack - they were processes that shared memory and resources with their parent process, and were referred to internally as LWPs - lightweight processes.

I also remember a lot of Unix/Linux people not liking the idea of multithreading, preferring multiple processes to one, single-threaded process.

netbsdusers

9 days ago

All kernels execute threads. It's just that very old Unixes had a unity of thread and process (and Linux, having emulated that, later introduced an unprecedented solution to bring in support for POSIX threads). The other Unixes for their part all have a typical process and threads distinction today and have had for a while.

lr1970

9 days ago

> It should also be noted that while NT as a product is much newer than Unices, its history is rooted in VMS fundamentals thanks to its core team of ex-Digital devs led by David Cutler.

WNT = VMS + 1 (next letter in alphabet for all three)

amatwl

9 days ago

For the record, NT comes from the codename for the Intel i860 (N10) which was the original target platform for NT.

mannyv

8 days ago

NT used to mean "New Technology," if I remember correctly. Not sure if that was the internal codename or a marketing creation anymore.

Dwedit

9 days ago

Threads aren't created in milliseconds, that would be really slow. It's more like microseconds.

nullindividual

9 days ago

Typo, thanks for the correction. Too late to edit :-)

slt2021

9 days ago

Process is a way to segregate resources (memory, sockets, file descriptors, etc). You kill a proc - it will release all memory and file descriptors.

Thread is a way to segregate computation. You spawn a thread and it will run some code scheduled by the OS. you kill/stop a thread and it will stop computation, but not the resources.

TomMasz

7 days ago

Coming from working with VAX/VMS to NT, I was struck by the similarities between them. It was only years later I found out about NT's DEC connections.

qsdf38100

9 days ago

WNT is VMS+1

V->W

N->M

S->T

nullindividual

9 days ago

> https://www.youtube.com/watch?v=xi1Lq79mLeE&t=4314s

"New Technology", but yes, it's funny

JeremyNT

9 days ago

One of my very favorite facts about Windows 2000, as revealed in its boot screen, is that it's based on New Technology Technology.

(I no longer work with Windows very much, but this little bit of trivia has stuck with me over the years)

steve1977

9 days ago

Initially actually (afaik) it stood for N10, for the Intel i860 CPU. I think “New Technology” came from marketing then.

nullindividual

9 days ago

If you watch the interview with David Cutler at the time code I linked, he explains that NT stands for New Technology, which marketing did not want.

lproven

7 days ago

In which he specifically says that there may have been some point, early on, where it stood for N-10.

I just watched it. He does not say what you are maintaining he says.

queuebert

8 days ago

In true Windows form, you have a memory access error.

pmontra

9 days ago

Actually

M->N

qsdf38100

8 days ago

Nice catch. Too late to edit unfortunately.

revskill

8 days ago

Could you please explain those characters? What do they mean? Thanks.

fredoralive

8 days ago

VMS[1] is an OS for VAX[2] systems by Digital that Dave Cutler worked on before Windows NT (with the abandoned MICA OS for the equally abandoned PRISM CPU architecture between the two). As people have noted, the NT kernel is rather VMS / MICA like, because it's written by some of the same people, so they're solving problems with things they know work (with some people suggesting directly copied code as well, although VMS and MICA didn't use C as their main programming languages).

Some people point out if you shift characters by one "VMS" becomes "WNT", and give it as an explanation of the name choice of Windows NT, but it's a coincidence. For one thing, nobody ever explains how this gag was going to work back when the project was "NT OS/2"[3].

[1] Virtual Memory System, originally VAX/VMS, later OpenVMS.

[2] Later DEC Alpha, Intel Itanic and now AMD64 systems.

[3] AKA OS/2 3.0 or Portable OS/2.

kev009

9 days ago

This is great! It would be interesting to see darwin/macos in the mix.

On the philosophical side, one thing to consider is that NT is in effect a third system and therefore avoided some of the proverbial second system syndrome.. Cutler had been instrumental in building at least two prior operating systems (including the anti-UNIX.. VMS) and Microsoft was keen to divorce itself from OS/2.

With the benefit of hindsight and to clear some misconceptions, OS/2 was actually a nice system but was somewhat doomed both technically and organizationally. Technically, it solved the wrong problem.. it occupies a basically unwanted niche above DOS and below multiuser systems like UNIX and NT.. the same niche that BeOS and classic Mac OS occupied. Organizationally/politically, for a short period it /was/ a "better DOS than DOS and better Windows than Windows" with VM86 and Win API support, but as soon as Microsoft reclaimed their clown car of APIs and apps it would forever be playing second fiddle and IBM management never acknowledged this reality. And that compatibility problem was still a hard one for Microsoft to deal with, remember that NT was not ubiquitous until Windows XP despite being a massive improvement.

cturner

8 days ago

Be is much more like NT than OS/2 or classic Mac OS. Like NT: kernel written in C, portable, emphasis on multi-threading, robust against subsystem failures, shipped with a TCP/IP stack in the base OS. Be booted straight into the desktop but you could download software to give it a logon screen like NT4 and different users - the structures were already in place.

The current crop of operating systems may themselves be a temporary niche. The designs of NT and Unix are awash with single-host assumptions, yet most use-cases are now networked. Consider the way that Linux filesystem handles are specific to the host they are running on, rather than the grid of computers they run in. Yet we run word processors in browsers, and ssh to other hosts to dispatch jobs.

There is a gap for a system which has an API that feels like an operating system API, but which sits on top of a grid of computers, rather than a single host. The kernel/keeper acts as a resource-manager for CPUs and memory in the grid. Such systems exist in sophisticated companies but not in the mainstream. Apache Yarn is an example of a system headed in that direction.

Once such a system becomes mainstream, you don't need the kind of complex operating systems we have now. A viable OS would do far less - coordinate drivers, scheduler, TCP/IP stack.

kev009

7 days ago

Yeah, we cannot see very far into the future, and betting that system software will remain stagnant is not a good bet. If that means wholesale replacement, few of us will understand it at first. Otherwise retrofits will remain the norm; for example, one old idea that seems to be on the up and up is hardware-enforced capability systems, and those working on them have found plausible ways to retain most of the existing C ABI world. Whether it always remains a good idea to continue propping up the past is one of the hardest questions and ultimately will be more of an economic one.

There are obsolete systems that meet your criteria, Plan 9 and Apollo Domain/OS had plausible early answers for networked computing that haven't been adequately explored.

For your last point, I would say scheduling, security, and namespacing are the core features and everything else can be layered (perhaps interchangeably) on top. As a practical example, even monolithic FreeBSD supports fully runtime-swappable TCP/IP stacks and out-of-tree network stacks.. the core illusion for that system is retaining the same socket interface for userspace. Presumably if you get the namespacing thing right, the entire system and network of things would look a lot more like deliberately interchangeable parts.

cturner

7 days ago

Had not heard of Apollo, an interesting read. I am building a 9P library at the moment towards some projects in this spirit.

You make a good case for base features in your last paragraph. I think monolithic drivers belong there as well. Raw speed seems to be important for survival in computing platforms. I am far from a domain expert, but as far as I know, monolithic drivers have efficiency/speed advantages over microkernel ones.

hnlmorg

9 days ago

That “niche” you described was actually the desktop computing norm for more than a decade.

Let’s also not forget RISC OS, Atari TOS, AmigaOS, GEOS, SkyOS and numerous DOS frontends including, but not limited to, Microsoft Windows.

kev009

9 days ago

I believe we are obliquely agreeing.

To further my thoughts a bit, the distinction I would place on OS/2 over DOS is twofold: 1) first-class memory protection, 2) full preemptive multitasking. The distinction I would place against all widely used modern desktop OSes is the lack of first-class multi-user support.

Early Windows gained cooperative multitasking like classic Mac OS, but neither fundamentally used memory protection the way later OSes normalized, and that turns out to be a pretty clear dead end. I believe there are extensions for both that retrofit protection in. Both also have some add-on approaches to multi-user but are not first-class designs.

So, even the most robust single-user system that implements full memory protection and preemptive multitasking still seems to be stuck in a valley. I.e. whatever the actual cost in terms of implementation and increased cognitive workload over single-user systems (i.e. "enter your administrative password" prompts on current macOS, or Windows administrative consent dialogs), it seems to be accepted by the masses.

And note that this isn't implicitly a negative judgement; for instance I find BeOS or those in your list can be absolutely fascinating. And OS/2 is lovely even today for certain niche or retrocomputing things. Just pointing out that NT made a better bet for the long term, and some of that undoubtedly related to the difference of a couple years.. keep in mind OS/2 ran on a 286, which NT completely bypassed.

nullindividual

9 days ago

Windows 2.0 for i386 was the first to introduce protected mode and preemptive multitasking. These features had to wait for Intel but it was available in 1987.

kev009

9 days ago

In a limited sense, yes, Windows (without NT) increasingly /used/ memory protection hardware over its life but never in a holistic approach as we typically understand today to create a TCB.

I don't believe Windows 2.0 implemented preemptive tasking, can you show a reference so I can learn?

fredoralive

8 days ago

AIUI Windows/386 2.x is basically a pre-emptive V86-mode DOS multitasker that happens to be running Windows as one of its tasks. So the GUI itself is cooperative, but between its VM and DOS VMs it can pre-empt.

(Windows 3.x 386 mode is similar; with Windows 9x, stuff in the Windows VM can pre-emptively multitask, mostly.)

kev009

8 days ago

Thanks that explanation makes sense to me!

jazzypants

8 days ago

I'm not that guy, but the Wikipedia article[1] is a decent jumping-off point. I also found this blog article[2] talking about the different versions, although it seems to get a couple of things wrong according to the discussion about it on lobste.rs[3]. Finally, this long article from Another Boring Topic[4] includes several great sources.

1 - https://en.m.wikipedia.org/wiki/Windows_2.0#Release_versions

2 - https://liam-on-linux.livejournal.com/78006.html

3 - https://lobste.rs/s/4xfswa/what_was_difference_between_windo...

4 - https://anotherboringtopic.substack.com/p/the-rise-of-micros...

lproven

7 days ago

That's my blog post -- thanks for the link.

The Lobsters discussion was very interesting. Some people in there were extremely knowledgeable, but also very confrontational. Coming straight in going "that's all wrong!" is not a productive way to start a discussion IMHO.

I didn't get any very firm conclusions from it. There are some very knowledgeable people out there on the WWW claiming, with convincing arguments, that Windows/286 couldn't run in 286 mode and only gave you 640kB ("conventional memory" in DOS terminology) + 64kB HMA.

Others, including from Microsoft, say no, Windows/286 did run in protect mode and could access 16MB of (segmented) RAM.

I do not know for sure who is right.

devbent

8 days ago

Windows has had multi-user support for ages.

That is how user switching in XP works, and how RDP works. You can have an arbitrary number of sessions of logged in users at once, only limited by the license for what version of Windows you have installed.

There have also been versions of Windows that allow multiple users to interact with each other at once, but I believe these have all been cancelled and I do not know to what extent these simultaneous users had their own accounts.

kev009

8 days ago

Well of course, Windows XP is a direct descendant of Windows NT. Maybe you are referring to Citrix or Terminal Server, which are also Windows NT technologies.

amaccuish

8 days ago

Windows NT was multi-user from the start. The only change was the session object added in Vista.

nullindividual

9 days ago

Isn't Windows more than just a frontend[0]? I.e., Windows/386 added protected mode and other features.

[0] https://en.wikipedia.org/wiki/Microsoft_Windows#Early_versio...

hnlmorg

9 days ago

To be honest I was being flippant with that Windows remark but you’re right to call me out for it.

Microsoft get a lot of stick, but Windows of the 90s did bring a lot to the table. And by 9x, DOS was basically a boot loader.

torginus

8 days ago

Yeah, this was a common misconception - due to the fact you could boot into Windows (up to 95) from DOS, people assumed it was just a frontend program.

hnlmorg

8 days ago

Not just up to 95, you could boot into Windows ME from DOS too.

But I agree that DOS was effectively just a bootloader for the 9x era of Windows. I was just being flippant in my previous post however you guys are right to call me on it.

hypercube33

9 days ago

How dare you ignore BeOS which isn't Windows or Nix

hnlmorg

8 days ago

It had already been mentioned and the point of my post was to give other examples

nullindividual

9 days ago

> And that compatibility problem was still a hard one for Microsoft to deal with, remember that NT was not ubiquitous until Windows XP despite being a massive improvement.

I think when it comes to this it is best to remember the home computing landscape of the time, and the most important part: DRAM prices.

They were absurdly high and NT4/2000 required more of it.

My assumption is Microsoft would have made the NT4/2000 jump much quicker if DRAM prices were driven in a downward direction.

hnlmorg

9 days ago

DOS compatibility was a far bigger issue.

PC gaming was, back then, still very DOS-centric. It wasn’t until the late 90s that games started to target Windows. And even then, people still had older games they wanted supported.

dspillett

9 days ago

I'd say both. IIRC the DOS story was better under OS/2 than NT, but the RAM requirements were higher (at least until XP).

To add a third prong: hardware support was a big issue too as it is for any consumer OS, with legacy hardware being an issue just as it can be today if not more so. This hit both NT and OS/2 similarly.

hypercube33

9 days ago

NT 4 had NTVDM and it worked well enough. Quake 1, Command & Conquer, sim games for DOS, and a bunch of other stuff worked just fine. You'd run into issues with timing on crappier games, or with some games that talked directly to sound cards - I forget what the details were, but you'd just not have audio.

nikau

8 days ago

I had a Gravis UltraSound at the time and remember having to cut the reset signal line on the card.

I could then initialise the card in DOS and reboot into NT without it being reset and losing settings. Then some sketchy modified driver was able to use it.

cyberax

9 days ago

I remember NT4 having problems with games that wanted to access SVGA resolutions and SoundBlaster. I kept a volume with Win98 back then specifically for the games.

euroderf

9 days ago

I recall DRAM prices restricting the uptake of OS/2 also.

steeeeeve

8 days ago

I remember talking to an IBM rep when he came to CompUSA to demo OS/2.

I asked him why I should pick OS/2 over NT and his response was "honestly, if you can afford it, NT is better"

At the time, we were selling OS/2 for $99 and NT for $499 IIRC. (And I was able to use a store copy of both to take home and use, which was awesome)

OS/2's interface was better IMO.

kev009

9 days ago

Definitely impactful for NT4, not for 2000.

p_l

9 days ago

It was very impactful in the early days of 2000. Seeing 64 MiB "used up" by a barely loaded NT 5.0 beta/RC honestly had a sort of chilling effect. But prices soon fell and 128MB became an accessible option, just in time for Windows XP to nail Windows 9x dead.

hnlmorg

9 days ago

128MB was pretty common by the time Windows 2000 was released. I could afford it and I wasn’t paid well at that time.

Plus Windows ME wasn’t exactly nimble either. People talk about the disaster that was Vista and Windows 8 but ME was a thousand times worse. Thank god Windows 2000 was an option.

p_l

8 days ago

It got much better once the release date rolled in, but it was something I remember discussed a lot at the time among those who did use NT.

Also, at the time NT was still somewhat limited in official settings to rarer, more expensive places and engineering systems, with the typical business user keeping to Windows 98 on the "client" and NT Server domain controllers, or completely different vendors, at least outside big corporations with fat chequebooks. Active Directory started changing it a lot faster, the benefits were great and 2000 having hotplug et al was a great improvement, but it took until XP for the typical office computer to really run on NT in my experience.

hnlmorg

8 days ago

Let’s not forget that XP had twice the system requirements that 2000 did.

I think the main difference is really just that XP had a “Home” edition that came preinstalled on home PCs.

There’s no reason Microsoft couldn’t do that with 2000, and in fact I read that some stores did stock Windows 2000 instead of Windows Me (though this wasn’t something I personally witnessed).

p_l

8 days ago

I'm not talking about environments that could run Home Edition - one of the things MS disabled in Home was the ability to join AD domains, something that was available on 9x.

From what I recall, an important point for releasing ME at all was that compatibility (both in applications and drivers) was not yet ready at the time of NT 5.0 going gold, leading to a stopgap solution being deployed.

hnlmorg

7 days ago

In my experience, offices did run Windows 2000. I also knew offices that ran NT4 too.

I guess it depends on the office though. At risk of stating the obvious: Different organisations will do things differently.

p_l

7 days ago

NT4 was rarer, due to hardware compatibility issues as well as software compatibility. Windows 2000 brought great change there, but it only really finished with XP, which came very soon after.

The organizations that really needed NT for various reasons (engineering software, SMP, higher security settings, etc.) did move as early as NT4, indeed. Also, a non-trivial number of Alpha workstations ran NT4 (as well as Pentium Pro ones).

somat

9 days ago

My understanding is that ME, from a technological point of view, was 98 with NT drivers. It probably was a critical step in getting vendors to make NT drivers for all of their screwball consumer hardware, and this made XP, the "move the consumers to the NT kernel" step, a success. The lack of drivers is also what made XP 64-bit edition so fraught with peril, but XP-64/Vista was probably critical for Win7's success for the same reason.

But yeah, what a turd of a system.

mepian

9 days ago

98 was the one that introduced NT drivers (WDM).

hypercube33

9 days ago

Didn't it still use VXD drivers for a lot of stuff though?

cyberax

9 days ago

Yes, because WDM support was limited. It only allowed synchronous requests and was really only suitable for USB or storage drivers.

nullindividual

9 days ago

It was roughly $1/MB in 1999. Or about $250 USD with inflation in 2024 dollars for a 128MB DIMM.

netbsdusers

9 days ago

The second system for Cutler was really Mica - he discusses its outrageous scope in his recent interview with Dave Plummer.

markus_zhang

9 days ago

Dave Cutler is really someone I look up to and wish I could be (but could never be due to numerous reasons). I strongly resonate with what he said in "Showstopper":

  What I really wanted to do was work on computers, not apply them to problems.

And he has stuck to it for half a century.

tivert

9 days ago

> This is great! It would be interesting to see darwin/macos in the mix.

But that's just another UNIX.

kev009

9 days ago

Only in the user's perception. The implementation is nothing like UNIX, being a Mach 2.5 derivative with later additions like DriverKit.

steve1977

9 days ago

macOS is a proper UNIX. As was OSF/1 (aka Digital UNIX aka Tru64 UNIX), which also had a Mach 2.5 kernel.

kev009

9 days ago

So is z/OS. UNIX branding under The Open Group is correctly focused on the user's perception and has little to do with kernel implementation. Mach is no more UNIX than NT is VMS; to call one the other in this context of kernel discussion is reductionist and impedes correct understanding of the historical roots and critical differences.

p_l

9 days ago

OSF/1 however was a complete Unix system, even if it based its kernel on Mach (at least partially because it offered a fast path to SMP and threading), and formed the BSD side of the Unix wars.

And NeXTSTEP didn't diverge too much, so when OS X was released they updated the code base with the last OSFMK release.

icedchai

8 days ago

I'd say SunOS (4.x and earlier, not Solaris) was the BSD side of the Unix wars. For most of the 90's, SunOS was the gold standard for a Unix workstation. I worked at a couple of early Internet providers and the users demanded Sun systems for shell accounts. Anything else was "too weird" and would often have trouble compiling open source software.

kev009

8 days ago

Correct, SunOS is a descendant of BSD and was the most widely used and renowned one during that time. And like you said, it was the gold standard for easy builds of most contemporary software and enthusiastic support.

DEC Ultrix is also BSD kin and IBM AOS and HP-BSD were intentionally vanilla BSDs. There were some commercial BSDs like Mt Xinu and BSDi that were episodically relevant.

BSD proper was alive and well especially in the academic and research circles into the 1990s and we get the current derivatives like Net, Free, and Open which are direct kin.

Mach is regularly BSD-affined because BSD was typically the ported server, but Mach is decidedly its own thing (as a simple and drastic counterexample, there were MkLinux and OS/2 for PowerPC, which had little to do with BSD but are very much Mach). NeXTSTEP and eventually Darwin/macOS inherit the BSD affinity.

icedchai

8 days ago

Ultrix had a “weird” feeling to it. It didn’t even support shared libraries from what I remember.

kev009

8 days ago

It had some heavy hitters behind it, but my understanding is DEC's effort was mired in organizational problems leading to fractured strategy and commitment. Many vendors were still figuring shared libraries out into the early 1990s so it must have been swept away from Ultrix once OSF/1 became the plan of record.

kev009

9 days ago

Again, so too is z/OS a complete UNIX system (from the user perspective).

OSF/1 is decidedly not the BSD side of the Unix wars; it is its own alternative strand against its contemporaries BSD and System V. More specifically, it took its initial UNIX personality from IBM AIX and was rapidly developed and redefined to accommodate the standards du jour, which included BSD and System V APIs.

andrewla

9 days ago

Practically speaking there are a number of developer-facing concerns that are pretty noticeable. I'm primarily a Linux user but I worked in Windows for a long time and have grown to appreciate some of the differences.

For example, the way that command lines and globbing work is night and day, and in my mind the Windows approach is far superior. The fact that the shell is expected to do the globbing means that you can really only have one parameter that can expand, whereas Win32 offers a FindFirstFile/FindNextFile interface that lets command line parameters be expanded at runtime. A missing parameter in Unix can cause crazy behavior -- "cp *" -- but on Windows this can just be an error.
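
A minimal sketch of that runtime expansion - the program, not the shell, decides when and whether "*.txt" gets expanded (error handling omitted):

    #include <windows.h>
    #include <stdio.h>

    void list_matches(const wchar_t *pattern)    /* e.g. L"*.txt" */
    {
        WIN32_FIND_DATAW fd;
        HANDLE h = FindFirstFileW(pattern, &fd);
        if (h == INVALID_HANDLE_VALUE)
            return;                              /* no match: can be an error */
        do {
            wprintf(L"%ls\n", fd.cFileName);
        } while (FindNextFileW(h, &fd));
        FindClose(h);
    }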

On the other hand, the Win32 insistence on wchar_t is a disaster. UTF-16 is ... just awful. The Win32 approach only works if you assume 64k unicode characters; beyond that things go to shit very quickly.

panzi

9 days ago

Hard disagree. The way Windows handles command line parameters is bonkers. It is one string, and every program has to escape/parse it itself. Yes, there is CommandLineToArgvW(), but there is no inverse of it. You need to escape the arguments by hand and can't be sure the program will really interpret them the way you intended. Even different programs written by Microsoft have different interpretations. See the somewhat recent troubles in Rust: https://github.com/rust-lang/rust/security/advisories/GHSA-q...
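
For reference, a minimal sketch of what every Windows program ends up doing (this uses the shell32 CommandLineToArgvW parser; other programs, notably anything using the C runtime's own splitter, may apply different rules):

    #include <windows.h>
    #include <shellapi.h>   /* CommandLineToArgvW; link with shell32 */
    #include <stdio.h>

    /* The process receives its command line as one wide string and must
       split it into argv itself. */
    int main(void)
    {
        int argc = 0;
        LPWSTR *argv = CommandLineToArgvW(GetCommandLineW(), &argc);
        if (!argv)
            return 1;

        for (int i = 0; i < argc; i++)
            wprintf(L"argv[%d] = %ls\n", i, argv[i]);

        LocalFree(argv);   /* caller owns the returned block */
        return 0;
    }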

SkiFire13

9 days ago

> See the somewhat recent troubles in Rust: https://github.com/rust-lang/rust/security/advisories/GHSA-q...

FYI this started out as a vulnerability in yt-dlp [1]. Later it was found to impact many other languages [2]. Rust, along with some other languages, treated it as a vulnerability and fixed it, while others only updated their documentation or marked it as wontfix.

[1]: https://github.com/yt-dlp/yt-dlp/security/advisories/GHSA-42...

[2]: https://flatt.tech/research/posts/batbadbut-you-cant-securel...

andrewla

9 days ago

Question of where the pain goes, I guess. In Unix, having to deal with shell escaping when doing routine tasks is super annoying -- URLs with question marks and ampersands screwing everything up, and simple expansions (like the `cp *` example above) causing confusion.

Yes, Windows resolution and glob expansion can be inconsistent, but they usually aren't, whereas Unix makes you eat the cruft every time you use it. And you still get tools like ImageMagick with strange ad hoc syntax for wildcards because they can't use the standard wildcarding, or even ancient tools like find that force you to do all sorts of stupid shit to be compatible with globbing.

chasil

9 days ago

As I understand it, CMD.EXE came from OS/2 and has had many revisions that allow more pervasive evaluation of variables (originally, they were expanded only once, at the beginning of a script).

The .BAT/.CMD to build the Windows kernel must have originally been quite the kludge.

Including pdksh in the original Windows NT might have been a better move.

https://blog.nullspace.io/batch.html

pmontra

9 days ago

I quote URLs and "file names" with double quotes in Linux bash much like I quote "Program Files" in Windows cmd. It's the same. I quote spaces\ with\ a\ backslash\ sometimes.

pcwalton

9 days ago

It's really strange to me that Microsoft has never added an ArgvToCommandLineW(). This would solve most of the problems with Windows command line parsing.

pjmlp

8 days ago

It was common in MS-DOS compilers but largely ignored on Windows, because the GUI is the main way of using the system rather than always being stuck in the CLI world.

panzi

8 days ago

You still need to, e.g., pass a file path to a program to open that file in that program.

pjmlp

7 days ago

That can be achieved in various ways via modern OS IPC, not only the command line.

panzi

8 days ago

PS: Somewhat related, look at the bonkers way Windows resolves the program to execute from the command line string: https://learn.microsoft.com/en-us/windows/win32/api/processt...

PPS: Even less related, there's no exec() in Win32, so you can't write a starter program that does some setup and then runs another program (e.g. one specified as a parameter) in its place. You need to keep the starter program running and have it act as a proxy for the started program! See the various dotenv CLI implementations. But I digress.
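
A minimal sketch of that proxy pattern ("child.exe --some-flag" is just a hypothetical command line): the launcher can't replace itself, so it starts the child, waits on it, and relays its exit code.

    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        STARTUPINFOA si = { sizeof(si) };
        PROCESS_INFORMATION pi;
        char cmdline[] = "child.exe --some-flag";   /* hypothetical child */
        DWORD exit_code = 1;

        /* ...environment setup (e.g. SetEnvironmentVariable) would go here... */

        if (!CreateProcessA(NULL, cmdline, NULL, NULL, TRUE, 0,
                            NULL, NULL, &si, &pi)) {
            fprintf(stderr, "CreateProcess failed: %lu\n", GetLastError());
            return 1;
        }
        WaitForSingleObject(pi.hProcess, INFINITE);   /* the launcher must linger */
        GetExitCodeProcess(pi.hProcess, &exit_code);  /* relay the child's status */
        CloseHandle(pi.hThread);
        CloseHandle(pi.hProcess);
        return (int)exit_code;
    }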

Dwedit

9 days ago

The wchar_t thing is made much worse by disagreements on what type that actually is. On Win32, it's a 16-bit type holding UTF-16 code units (including surrogate pairs). But on some other compilers and operating systems, wchar_t is a 32-bit type.

Another problem with UTF-16 on Windows is that it does not enforce that surrogate pairs are properly matched. You can have valid filenames or passwords that cannot be encoded in UTF-8. The solution was to create another encoding system called "WTF-8" that allows unmatched surrogate pairs to survive a round trip to and from UTF-16.
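
A small Win32 sketch of the mismatch (assuming a Vista-or-later SDK for the WC_ERR_INVALID_CHARS flag): a 16-bit string containing a lone high surrogate is representable to the OS, but strict UTF-8 conversion of it fails, which is exactly the gap WTF-8 fills.

    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        /* 'a', an unpaired high surrogate (0xD800), 'b' -- not valid UTF-16. */
        const wchar_t name[] = { L'a', 0xD800, L'b', 0 };

        char utf8[16];
        int n = WideCharToMultiByte(CP_UTF8, WC_ERR_INVALID_CHARS,
                                    name, -1, utf8, sizeof(utf8), NULL, NULL);
        if (n == 0 && GetLastError() == ERROR_NO_UNICODE_TRANSLATION)
            puts("unpaired surrogate: no strict UTF-8 encoding exists");
        return 0;
    }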

benchloftbrunch

9 days ago

WTF-8 barely qualifies as "another encoding system" - it's a trivial superset of UTF-8 that omits the rule forbidding surrogate codes.

Imo that artificial restriction in UTF-8 is the problem.

zzo38computer

7 days ago

I think the problem is believing that one character set or character encoding is suitable for everything, and that it has one definition. Neither is true.

Sometimes the restriction is appropriate, but sometimes a variant without this restriction is appropriate, and sometimes Unicode is not appropriate at all. The "artificial restriction" in UTF-8 is legitimate (since surrogates are not valid Unicode characters) but should not apply to all kinds of uses; the problem is programs that apply it where it shouldn't be applied, because of limitations in the design.

I think that using a sequence of bytes as the file name and passwords is better, and that file names and passwords being case sensitive is also better.

However, I think "WTF-8" specifically means that mismatched surrogates can be encoded, in case you want to convert to/from invalid UTF-16. Sometimes you might use a different variant of UTF-8, that can go beyond the Unicode range, or encode null characters without null bytes, etc. Sometimes it is better to use different Unicode encodings, or different non-Unicode encodings (which cannot necessarily be converted to Unicode; don't assume that you can or should convert them), or to care only that it is ASCII (or any extension of ASCII without caring about specific extension it is), or to not care about character encoding at all.

rbanffy

9 days ago

NT was more ambitious from the start, and this might be one of the reasons why it didn’t age so well: the fewer decisions you make, the fewer mistakes you’ll have to carry. GNU/Linux (the kernel, GNU’s libc, and a handful of utilities) is a very simple, very focused OS. It does not concern itself with windows or buttons or mice or touchscreens. Because of that, it’s free to evolve and tend to different needs, some of which we are yet to see. Desktop environments come and go, X came and mostly went, but the core has evolved while keeping itself as lean as technically possible.

andrewla

9 days ago

This is more of a sweeping generalization than I think would be appropriate.

The command line handling as I note above is a really crufty old Unix thing that doesn't make sense now and is confusing and offputting when you get papercuts from it.

Another notable thing that they talk about to an extent is process creation -- the fork/exec model in Linux is basically completely broken in the presence of threads. The number of footguns involved in this process has now grown beyond the ability of a human to understand. The Windows model, while seeming a bit more cumbersome at first, is fully generalizable and very explicit about the handoff of handles between processes.
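
As a sketch of what "explicit handoff" looks like (hypothetical "child.exe", error handling trimmed): the parent lists exactly which handles the child may inherit through PROC_THREAD_ATTRIBUTE_HANDLE_LIST, rather than the child implicitly receiving whatever a fork would have duplicated.

    #include <windows.h>

    /* 'inherited' must itself have been created as an inheritable handle. */
    static BOOL spawn_with_one_handle(HANDLE inherited)
    {
        SIZE_T size = 0;
        InitializeProcThreadAttributeList(NULL, 1, 0, &size);   /* query size */
        LPPROC_THREAD_ATTRIBUTE_LIST attrs = HeapAlloc(GetProcessHeap(), 0, size);
        if (!attrs || !InitializeProcThreadAttributeList(attrs, 1, 0, &size))
            return FALSE;
        UpdateProcThreadAttribute(attrs, 0, PROC_THREAD_ATTRIBUTE_HANDLE_LIST,
                                  &inherited, sizeof(inherited), NULL, NULL);

        STARTUPINFOEXA si = { 0 };
        si.StartupInfo.cb = sizeof(si);
        si.lpAttributeList = attrs;
        PROCESS_INFORMATION pi;
        char cmdline[] = "child.exe";   /* hypothetical program */

        BOOL ok = CreateProcessA(NULL, cmdline, NULL, NULL,
                                 TRUE,                          /* inherit handles */
                                 EXTENDED_STARTUPINFO_PRESENT,
                                 NULL, NULL, &si.StartupInfo, &pi);
        if (ok) { CloseHandle(pi.hThread); CloseHandle(pi.hProcess); }
        DeleteProcThreadAttributeList(attrs);
        HeapFree(GetProcessHeap(), 0, attrs);
        return ok;
    }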

The file system model I think is mostly a wash -- on the one hand, Windows' file locking model means that you can't always delete a file that's open, which can be handy. On the other hand, it means that you can't always delete a file that's open, which can be painful. On Linux, it's possible that trying to support POSIX file system semantics can result in uninterruptible sleeps that are nearly impossible to diagnose or fix.

simoncion

8 days ago

> The command line handling as I note above is a really crufty old Unix thing that doesn't make sense now and is confusing and offputting when you get papercuts from it.

It's a power tool, and one whose regularity and consistent presence is appreciated by many.

If it trips you up, then configure your interactive shell and/or scripts to not glob. Bash has 'set -f', other shells surely have similar switches, and undoubtedly there are shells that do no globbing at all.

If your counterargument to this workaround is that now you definitely can't use "*?[]" and friends, whereas you could maybe do that in some Windows software, my counterargument to that would be that leaving globbing etc. up to the application software not only makes it inconsistent and unreliable, it does nothing to systematically prevent the 'cp *' problem you mentioned above.

andrewla

8 days ago

I mean, I can't turn globbing off -- applications do not do glob expansion in unix (with rare exceptions like `find`), so they just wouldn't work. This is an intrinsic decision in how linux applications are designed to work with command line arguments. All shells need to comply with this if they expect to be able to use command line tools and allow users to use wildcards. There's simply no other choice.

The `cp *` case is annoying in that it will sometimes fail explicitly, but often will work, except that it will do something entirely unexpected, like copy all the files in the directory to some random subdirectory, or overwrite a file with another file. This is unfixable. Files that start with a dash are a minefield.

The windows approach is not without its flaws (quoting is horrible, for example), but on balance I think a little more reasonable.

simoncion

5 days ago

> I mean, I can't turn globbing off -- applications do not do glob expansion in unix...

You can turn globbing off, you just find operating without it to be inconvenient and don't like it.

Just as I find the inconsistency and irregularity that comes from Windows demanding that software do its own glob/wildcard expansion inconvenient and don't like it.

xolve

8 days ago

I agree a lot about threads! Process creation and handling APIs, e.g. fork, signals, exec, etc., are great when working with single-threaded processes and the command line, but they have so many caveats when working with threads.

A paper by Microsoft on how viral fork is and why its presence prevents a better process model: https://www.cs.bu.edu/~jappavoo/Resources/Papers/fork-hotos1...

rbanffy

8 days ago

True, starting a process and waiting until all its threads complete is a pain on Linux but I don’t remember it being less painful on Windows (although the last time I tried that on Windows was with Windows 2000).

okanat

9 days ago

Sorry, but your argument is baseless. There is nothing in the NT kernel that forces a certain UI toolset, nor does it deal with the UI anymore (it briefly did, via the GDI API, when PCs were less powerful; not anymore). The Linux kernel and its modesetting facilities are quite invasive, and it is Linux that forces a certain way of implementing GPU drivers.

Windows just requires the driver to implement a set of functions. Win32 drawing APIs are now completely userspace APIs, and Windows has actually implemented a couple of different ones.

Browsers switched to using a single graphics API canvas component long ago. Instead of relying on high-level OS graphics APIs, they come with their own direct renderers that are opaque to the OS apart from the buffer requests. This approach can utilize GPUs better. Windows was among the first systems to implement the same idea as a UI component library. It is called WPF and it is still the backbone of many non-Win32 UI elements on Windows. On the Linux side, I think Qt was the first to implement such a concept with QML / QtQuick, which came much later than WPF.

Moreover, your argument that Unix evolves better or more freely falls apart when you consider that we had to replace X. In the Unix world, the tendency to design "minimal" APIs that are close to the hardware is the source of all evil. Almost any new technology requires big refactoring projects from application developers in the Unix world, since the system programmers haven't bothered to design future-proof APIs (or they are researchers / hobbyists who don't have a clue about current business and user needs or upcoming tech).

Windows didn't need to replace Win32, since it was designed by engineers who understood the needs of businesses and designed an abstract-enough API that could survive things like the introduction of HiDPI screens (which is just a "display changed, please rerender" event). It is simply a better API.

On the Unix side, X was designed tightly around 96 or 72 DPI screens, and everything had to be bolted on or hacked in, since the APIs were minimal or tightly coupled to the hardware capabilities of the time. Doing direct rendering on X was a pain in the ass and involved an intertwined web of silly hacks, which is why the DEs of the 2010s kept discovering weird synchronization bugs and why Wayland needed to be invented.

rbanffy

8 days ago

> There is nothing in the NT kernel that forces a certain UI toolset

Unfortunately, Windows the OS and the NT kernel are not completely independent - you can’t really run one without the other.

> we had to replace X

We did so because we wanted to continue to run apps made for it. The fact it was done without any change to the Linux kernel is precisely because the graphical environment is just another process running on top of the OS, and it’s perfectly fine to run Linux without X.

anthk

8 days ago

>Qt...

Cairo and such predate QML by a long time...

pie_flavor

8 days ago

wchar_t was advanced for its time: Microsoft was an early adopter of Unicode, and the ANSI codepage system it replaced was real hell, but it was what almost everyone else was using. UTF-8's dominance is much more recent than Linux users tend to assume - Linux didn't (and in many places still doesn't) support Unicode at all, but an API that passes through ASCII or locale-based ANSI can have its docs changed to say UTF-8 without really being wrong. Outside of the kernel interface, languages used UTF-16 for their string types, like Python and Java. Even for a UTF-8 protocol like HTTP, UTF-16 was assumed better for JS. Only now that it is obvious that UTF-16 is worse (as opposed to just having an air of "legacy") is Microsoft transitioning to UTF-8 APIs.

zzo38computer

7 days ago

> an API that passes through ASCII or locale-based ANSI can have its docs changed to say UTF-8 without really being wrong

Actually, it can be wrong, and it is not necessarily a good idea to do this anyway (in fact, just changing the documentation is almost always not a good idea, I think, unless the problem was an error in the original documentation). Sometimes it is better to say that it is ASCII but allows 8-bit characters as well (without caring what they are), or something like that. For font rendering, it will be necessary to be more specific, although it might depend on the font as well.

> it is obvious that UTF-16 is worse

UTF-16 is not always worse. It depends both on the program (and what requirements it has for processing the text) and on the language of the text. And then, there is also UTF-32. (And sometimes, Unicode is worse regardless of the encoding.)

torginus

8 days ago

I can see why MS went with UTF-16. As someone with experience from before that era, and who comes from a non-English culture: before UTF-16, most people used crazy codepages and encodings for their stuff, resulting in gobbledygook once something went wrong - and it always did.

If you run with the assumption that all UTF-16 characters are two bytes, you still get something that's usable for 99% of the Earth's population.

devbent

8 days ago

UTF-8 wasn't a thing when the decision to go with UTF-16 was made.

UTF-8 became a thing shortly thereafter and everyone started laughing at MS for having followed the best industry standard that was available to them when they had to make a choice.

wvenable

8 days ago

UTF-16 also didn't exist when the decision was made. It was UCS-2.

Microsoft absolutely made the right decision at the time and really the only decision that could have been made. They didn't have the luxury to ignore internationalization until UTF-8 made it viable for Linux.

yencabulator

8 days ago

Meanwhile a bunch of unix graybeards literally invented UTF-8 on a napkin, and changed the world.

zzo38computer

8 days ago

I think neither Windows nor UNIX has a better way of handling command-line arguments; they both have problems (although I think Windows has some additional problems, even some that you did not mention).

I have a different way in my idea of operating systems: One of the forks of a file specifies the type of the initial message that it expects, and the command shell will read this to figure out the types, so that effectively programs have types. This initial message also includes capabilities. This is better than needing to convert everything to/from text and then making a mess (e.g. needing quotation, unexpectedly interpreting file names as switches (even if they are quoted), needing an extra delimiter if the text starts with a minus sign which must then be parsed from the program too, the "crazy behavior" you mention with "cp *", etc).

(Also, my operating system design does not actually have file names nor directory structures.)

(I also think that Unicode is no good, but that is a different issue.)

tracker1

8 days ago

For what it's worth, UTF-8 didn't exist when UTF-16/UCS-2 was created. I'm sure Windows, JavaScript and a lot of other things would be very different if UTF-8 had come first.

Aside: it's a bit irksome how often cp-437 is left out of internationalization/character tools. PC-DOS / US was the base of a large amount of software/compatibility.

dspillett

9 days ago

Not even UTF16. Just UCS2 for a long time.

deathanatos

8 days ago

Windows, I believe, is WTF-16. It permits surrogates (e.g., you can stick an emoji in a filename) — thus it cannot be UCS-2. It permits unpaired surrogates — thus it cannot be UTF-16.

jhallenworld

9 days ago

wchar_t is definitely the most annoying difference, in that it shows up everywhere in your C/C++ source code.

TillE

9 days ago

It depends on what you're doing, but after many years I've just settled on consistently using UTF-8 internally and converting to UCS-2 at the edges when interacting with Win32.

There's just too much UTF-8 input I also need to take, and converting those to wstring hurts my heart.
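
That edge conversion is small enough to sketch; something along these lines, with error handling kept minimal and the caller freeing the result:

    #include <windows.h>
    #include <stdlib.h>

    /* Convert a NUL-terminated UTF-8 string to a freshly allocated wide
       string, right before calling a W API. Returns NULL on failure. */
    static wchar_t *utf8_to_wide(const char *utf8)
    {
        int n = MultiByteToWideChar(CP_UTF8, 0, utf8, -1, NULL, 0);
        if (n <= 0)
            return NULL;
        wchar_t *wide = malloc(n * sizeof(wchar_t));
        if (wide && !MultiByteToWideChar(CP_UTF8, 0, utf8, -1, wide, n)) {
            free(wide);
            return NULL;
        }
        return wide;
    }

    /* Example use at the edge: open a file from a UTF-8 path. */
    static HANDLE open_utf8_path(const char *path)
    {
        wchar_t *w = utf8_to_wide(path);
        if (!w)
            return INVALID_HANDLE_VALUE;
        HANDLE h = CreateFileW(w, GENERIC_READ, FILE_SHARE_READ, NULL,
                               OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL);
        free(w);
        return h;
    }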

mixmastamyk

9 days ago

> command line and globbing

Both Unix and NT are suboptimal here. I believe there was an OS lost to time (and my memory) that had globbing done by the program, but using an OS-provided standard library. That is probably the best way to do it consistently. That said, having to pick the runner-up, I prefer the Unix way, as the unpredictable results happened to me more often on NT than, for example, your cp example, which though possible I don't think I've ever hit in my career.

The rest of command.com/cmd.exe is so poorly designed as to be laughable, only forgiven for being targeted to the PC 5150, and should have been retired a few years later. Makes sh/bash look like a masterpiece. ;-)

andrewla

9 days ago

Win32 in theory has globbing done by an OS-provided standard library -- the `FindFirstFile` and `FindNextFile` win32 calls process globbing internally, and they are what you are expected to use.

Some applications choose to handle things differently, though. For example, the RENAME builtin does rough pattern-matching, so "REN *.jpg *.jpg.old" will work pretty much the way that intuition demands that it work, but the latter parameter cannot be globbed as there are no such files when this command begins to execute. Generally speaking this can get pretty messy if commands try to be clever about wildcard expansion against theoretical files.

stroupwaffle

9 days ago

If wchar_t holds the majority of code points for a given use, then there are some benefits to having a fixed-width character type for certain algorithms.

But it is fairly easy to convert wchar_t to and from UTF-8 depending on use.

UTF-16 is not awful; it is the same as an 8-bit character set, just twice as wide.

andrewla

9 days ago

UTF-16 is fine so long as you are in Plane 0. Once you have to deal with surrogate pairs, then it really is awful. Once you have to deal with byte-order-markers you might as well just throw in the towel.

UTF-8 is well-designed and has a consistent mechanism for expanding to the underlying code point; it is easy to resynchronize, and for ASCII-based systems (like most protocols) the parsing can be dead simple.
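
To show how mechanical that expansion is, a bare-bones decoder (validation of overlong forms, surrogate ranges, and out-of-range values is deliberately omitted; assumes a C11 compiler for the u8 literal):

    #include <stdint.h>
    #include <stdio.h>

    /* The lead byte encodes the sequence length; continuation bytes are all
       10xxxxxx, which is what makes resynchronization easy. */
    static const uint8_t *decode_utf8(const uint8_t *s, uint32_t *cp)
    {
        if (s[0] < 0x80) {                        /* 0xxxxxxx: ASCII */
            *cp = s[0];
            return s + 1;
        } else if ((s[0] & 0xE0) == 0xC0) {       /* 110xxxxx 10xxxxxx */
            *cp = ((uint32_t)(s[0] & 0x1F) << 6) | (s[1] & 0x3F);
            return s + 2;
        } else if ((s[0] & 0xF0) == 0xE0) {       /* 1110xxxx + 2 continuations */
            *cp = ((uint32_t)(s[0] & 0x0F) << 12) |
                  ((uint32_t)(s[1] & 0x3F) << 6) | (s[2] & 0x3F);
            return s + 3;
        } else {                                  /* 11110xxx + 3 continuations */
            *cp = ((uint32_t)(s[0] & 0x07) << 18) |
                  ((uint32_t)(s[1] & 0x3F) << 12) |
                  ((uint32_t)(s[2] & 0x3F) << 6) | (s[3] & 0x3F);
            return s + 4;
        }
    }

    int main(void)
    {
        const uint8_t text[] = u8"A\u00E9\u6F22";   /* "A", "é", "漢" */
        const uint8_t *p = text;
        while (*p) {
            uint32_t cp;
            p = decode_utf8(p, &cp);
            printf("U+%04X\n", (unsigned)cp);
        }
        return 0;
    }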

Dealing with Unicode text and glyph handling is always going to be painful because this problem is intrinsically difficult. But expansion of byte strings to unicode code points should not be as difficult as UTF-16 makes it.

Windows was converted to UCS-2 before the higher code planes were designed, and it never recovered.

Const-me

9 days ago

I would add that on modern Windows NT, Direct3D is an essential part of the kernel; see dxgkrnl.sys. This means D3D11 is guaranteed to be available. This is true even without any GPU: Windows comes with a software fallback called WARP.

This allows user-space processes to easily manipulate GPU resources, share them between processes if they want, and powers higher level technologies like Direct2D and Media Foundation.

Linux doesn’t have a good equivalent for these. Technically Linux has the dma-buf subsystem, which allows sharing buffers between processes. Unfortunately, it is much harder to use than D3D, and very specific to the particular drivers that export these buffers.

movedx

9 days ago

But why should Linux have an equivalent of such features?

Const-me

8 days ago

Because the modern world is full of computers with slow mobile CPUs and high-resolution, high-refresh-rate displays. On such computers you need a 3D GPU to render anything, even scrolling text, at the refresh rate of the display.

A 3D GPU is a shared hardware resource just like a disk. GPU resources are well-shaped slices of VRAM which can store different stuff, just like files are backed by the blocks of the underlying physical disks. User-space processes need to create and manipulate GPU resources and pass them across processes, just like they do with files on disks.

An OS kernel needs to manage 3D GPU resources for the same reason they include disk drivers and file system drivers, and expose consistent usermode APIs to manipulate files.

It seems Linux kernel designers mostly ignored 3D GPUs. The OS does not generally have a GPU API: some systems have OpenGL, some have OpenGL ES, some have Vulkan, some have none of them.

AnimalMuppet

8 days ago

> Because modern world is full of computers with slow mobile CPUs and high-resolution high refresh rate displays.

And Linux does run on such computers. But it also runs on mainframes, and on embedded systems with no graphics whatsoever. And it runs on a much wider variety of CPUs than Windows does.

So for Linux, it's much more of a stretch to assume that the device looks something like a PC. And if it's not going to be there in a wide variety of situations, then should Linux really have 3D graphics as part of the kernel? (At a minimum it should be removable, and have everything run fine without it.)

simonask

8 days ago

Isn't that a somewhat myopic view of what Linux should be, especially considering that its largest install base by far is the billions of Android devices?

Linux isn't always running on devices that have a direct human interface, but when it is, that interface is arguably the most important function.

marcosdumay

8 days ago

Or a modern OS does what Linux does: expose DMI and let a userspace driver manage the GPU.

lelandbatey

9 days ago

Why should a kernel have anything? Because it's useful and convenient, as the OP mentioned.

movedx

9 days ago

It _can be_ useful. It can also _not_ be useful to others. It sounds like it's not a choice in this case, but a forced feature, and that's fine for some and not for others.

So again, why _must_ Linux have an equivalent?

scoodah

8 days ago

No one used the word "must" until you did just now. The OP comment was pointing out a valid thing that Windows has that Linux does not. It’s fine if Linux doesn’t have it, but I don’t understand where you’re coming from in presenting this as though someone said Linux must have it.

movedx

8 days ago

> Linux doesn’t have a good equivalent for these.

That implies Linux must or should have an equivalent to those features found in Windows -- you can choose any word you like, friend. There is no other reason to make that statement but to challenge the fact Linux doesn't have those options.

Fun fact: I switched to Kubuntu recently and I didn't even have to install a graphics driver. It was just there, just worked, and my AMD 7700 XTX is working fine and playing Windows "only" games via Proton just fine as well as Linux native games just fine.

I'm simply trying to get people to think about design choices and to question or state why one thing is better than another.

alt227

8 days ago

Don't read too much into the text; it doesn't imply what you are saying at all.

The reason to make that statement is to point out that there are differences in functionality.

Nobody in the thread said one situation was better than the other, until you did.

lelandbatey

8 days ago

> Linux doesn’t have a good equivalent for these.

That literally does not imply a need for those features. It points out a thing that Linux lacks, which is true. And that's where it stops. You are projecting an implication that "Linux does need x, y, or z because Windows has X, Y, or Z."

We're not sitting in a thread talking about what makes Linux/Windows better than the other, we're in a thread talking about just factual differences between the two. You can talk about two things, compare them, and even state your own preference for one or the other without stating that each should do everything that the other can do.

E.g. snowmobiles are easier to handle while driving in the snow than a Boeing 737. I like driving my snowmobile in the snow more than I like taxiing a Boeing 737 in the snow.

We can talk about things without implying changes that need to happen.

Dalewyn

8 days ago

This line of thought is precisely why Linux continues to falter in mainstream acceptance.

Windows exists to enable the user to do whatever he wants. If the user wants to play a game or watch a video, Direct3D is there to let him do that. If he doesn't, Direct3D doesn't get in the way.

This is far better than Linux's (neckbeards'?) philosophy of Thou Shalt Not Divert From The One True Path which will inevitably inconvenience many people and lead to, you guessed it, failure in the mainstream market.

Contrast Android, which took Linux and re-packaged it in a more Windows-like way so people could actually use the bloody thing.

agumonkey

8 days ago

Not to contradict but it seems to me that *nixes have always split user interaction and 'compute'. To them running a headless toaster is probably more valuable than a desktop UI.

SoothingSorbet

8 days ago

> Windows exists to enable the user to do whatever he wants

It's very bad at that, then, considering it insists on getting in my way any time I want to do something (_especially_ something off of the beaten path).

> If the user wants to play a game or watch a video, Direct3D is there to let him do that. If he doesn't, Direct3D doesn't get in the way.

I don't see what the point you are trying to make is, this is no different on Linux. What does D3D being in the kernel have to do with _anything_? You can have a software rasterizer on Linux too. You can play games and watch videos. Your message is incoherent.

Dalewyn

8 days ago

>I don't see what the point you are trying to make is

Parent commenter said Linux shouldn't have <X> if it's not useful for everyone, though more likely he means for himself. Either way, he is arguing Linux shouldn't have a feature for dogmatic reasons. Violating the Unix ethos of doing only one thing, or something.

Meanwhile, Windows (and Android) have features so people can actually get some bloody work done rather than proselytize about their glorious beardcode.

movedx

8 days ago

> Parent commenter said Linux shouldn't have <X>

I said no such thing! You've completely missed the point.

Dalewyn

8 days ago

You said "why must Linux have" a feature that can be useful to some and not useful to others. Taking that to its strongest conclusion[1], you're saying Linux shouldn't have something if it's not useful to "everyone" and asking for counter arguments; this is not unlike the "Do one thing and do it well." Unix ethos.

Clearly, as demonstrated by history, most people prefer that their computers can and will do the many things they need or want with minimal finagling. That is what having DirectX inside Windows means, and it is why Linux, which at best makes that an option requiring finagling and at worst flat-out refuses it as heresy, flounders.

[1]: https://news.ycombinator.com/newsguidelines.html

movedx

7 days ago

> ... you're saying Linux shouldn't have something...

I said no such thing. You're taking a question and converting it into a statement in your own head.

Why must any operating system be designed with a 3D rendering engine compiled into it? It's just a question. I'm trying to learn. I've never once said it should or should not have the thing, I'm asking why would it need it? Why should it have an equivalent to Windows' implementation of such a thing? What do I gain? Is that always a good design choice? Is that true of Windows Server, and if so, why do I need 3D rendering baked into my Windows Server? What about Windows Server Core... does the NT kernel have it baked in there?

Dalewyn

7 days ago

Here's what you said again for reference:

>It _can be_ useful. It can also _not_ be useful to others. It sounds like it's not a choice in this case, but a forced feature, and that's fine for some and not for others.

>So again, why _must_ Linux have an equivalent?

That is very different from simply asking why Linux should have a "Direct3D" built in like Windows does Direct3D.

>What do I gain?

To answer this again and more in-depth this time: A central, powerful subsystem that can be assumed to exist. We can assume Direct3D is and always will be available in Windows.

One of Linux's biggest problems is that you can't safely assume anything will exist, in particular cases not even the kernel. This is the reason containers were invented: you need to bring your own entire operating environment on account of it being impossible to assume anything. The cost for this workaround is performance and complexity, the latter of which most users abhor.

>Is that always a good design choice?

Yes, it enables users thereof.

> Is that true of Windows Server, and if so, why do I need 3D rendering baked into my Windows Server? What about Windows Server Core... does the NT kernel have it baked in there?

If the server is a media server, say, having DirectX means the server can do encoding and decoding itself and that's something many people want.

Windows itself also needs Direct3D for rendering the desktop, which Server also obviously has.

There is next to no practical cost for the user.

severino

8 days ago

I don't get it. You mean people can't watch videos or play games in Linux?

movedx

7 days ago

I'm using Linux right now, and sadly I only have access to an 80x30 black and white terminal. I'm writing this comment as a raw HTTP request to this site. Send help. I just need colour and at least 1024x768... please help! I wish I could watch videos!

movedx

8 days ago

> If the user wants to play a game or watch a video, Direct3D is there to let him do that. If he doesn't, Direct3D doesn't get in the way.

I _just_ moved from Windows 11 to Kubuntu. None of that stuff is missing. In fact, unlike Windows 10/11, I didn't even have to install a graphics driver. My AMD 7700 XTX literally just worked right out of the box. Instantly. Ironically that’s not the case for Windows 10/11. This isn’t a “My OS is better than your OS” debate — we’re talking about why D3D being integrated into the kernel is a good idea. I’m playing devil’s advocate.

And thus, you missed my point: "Why should Linux have an equivalent to Direct3D" isn't me arguing that Windows having it is bad, it's me asking people to think about design choices and consider whether they're good or bad.

> This is far better than Linux's (neckbeards'?) philosophy of Thou Shalt Not Divert From The One True Path which will inevitably inconvenience many people and lead to, you guessed it, failure in the mainstream market.

If you think Windows having Direct3D "built in" is why it has mainstream dominance, then you potentially have a very narrow view of history, market timing, economics, politics, and a whole range of other topics that actually led to the dominance of Windows.

Hikikomori

8 days ago

>I _just_ moved from Windows 11 to Kubuntu. None of that stuff is missing. In fact, unlike Windows 10/11, I didn't even have to install a graphics driver. My AMD 7700 XTX literally just worked right out of the box. Instantly. Ironically that’s not the case for Windows 10/11.

How did you install a driver on windows if your gpu didn't work out of the box?

SoothingSorbet

8 days ago

I'm sure Windows is perfectly capable of driving a GOP framebuffer. That doesn't mean the kernel has an actual GPU driver.

Hikikomori

8 days ago

It will also install a proper driver via Windows Update; it can also do that during the installation.

movedx

8 days ago

No. That's not true. It does not do that. I've reinstalled Windows 11 several times to resolve issues or try these kinds of things out. It has never offered to download an AMD driver for me. This is false.

lproven

7 days ago

> No. That's not true. It does not do that.

Windows 10 can 100% download and install an nVidia proprietary driver for hardware it finds.

Indeed I inadvertently trapped it in a boot loop by fitting 2 dissimilar nVidia Fire cards with different GPU generations. This works on Linux if you use Nouveau but not with nVidia drivers.

Win10 lacks an equivalent of nouveau. It booted, detected card #1, downloaded and installed a driver, rebooted, the card came up; then it detected card #2, which wasn't working, downloaded and installed a driver, and rebooted.

Now card #2 initialised but #1 didn't work. You can only have 1 nVidia driver installed at a time.

So, Windows downloads and installs the driver for card #1... reboots... #1 works, #2 doesn't... download driver, install, reboot...

The only way to interrupt this is to power off and remove one card.

When I replaced both cards with a single AMD card, it downloaded the driver and everything worked.

You are wrong. Source: my own personal direct experience.

Hikikomori

7 days ago

Windows update doesn't install a proper driver?

movedx

7 days ago

Does it download an official AMD driver? No. It hasn’t ever done so for me across hundreds of Windows installs on hundreds of devices.

Dalewyn

7 days ago

Windows Update can and will grab most third-party drivers for your hardware if you let it, this includes video card drivers from Intel, Nvidia, and AMD.

Hikikomori

7 days ago

Yes it does. I even need to disable the iGPU my Ryzen has, because Windows Update keeps downloading the driver for it.

movedx

8 days ago

Now you're just being childish and lazy :)

user

8 days ago

[deleted]

nyrikki

9 days ago

There are a number of issues, like ignoring the role of VMS, the fact that Windows 3.1 had a registry, the performance problems of early NT that led to the hybrid model, the hype around microkernels at the time, the influence of Plan 9 on both, etc...

Cutler knew about microkernels from his work at Digital, OS/2 was a hybrid kernel, and NT was really a rewrite after that failed partnership.

The directory support was targeting Netware etc...

nullindividual

9 days ago

And don't forget Alternate Data Streams for NTFS! Made specifically for Mac OS.

steve1977

9 days ago

Also Richard Rashid - the project lead for Mach at CMU - joined Microsoft in 1991.

Which is kinda interesting - Rashid went to Microsoft, Avie Tevanian went to NeXT/Apple.

__d

9 days ago

Rashid went to Microsoft _Research_, which is quite different.

netbsdusers

9 days ago

What exactly was "hybrid" about the OS/2 kernel? "Hybrid" has always been basically a made-up concept, but in OS/2 it seems especially bizarre to apply it to what's obviously a monolithic kernel, even one that bears a lot of similarity to older Unix.

nyrikki

9 days ago

Real systems rarely live up to idealistic academic ideals.

Balancing benefits and costs of microkernel and monolithic kernels is common.

It looks like Google SEO gaming by removal of old content is making it hard to find good sources, but look at how OS/2 used ring 2 if you want to know.

Message passing and context switching between kernel and user mode are expensive, and if you ever used NT 3.51 that was clearly visible, as were the BSoDs when MS shifted to more of a 'hybrid' model.

AshamedCaptain

9 days ago

You can even call Windows/386 or 3.x "hybrid", and in my opinion it would be more accurate to call Windows/386 a hybrid kernel than calling NT one. There's a microkernel that manages VMs, and there is a traditional, larger kernel inside each VM (either Windows itself, or DOS). The microkernel also arbitrates hardware between each of the VMs, but it is the VMs themselves that contain most of the drivers, which are running in "user space"!

In comparison, Windows NT is basically a monolithic kernel. Everything runs in the same address space, so there's zero protection. Or at least, by any definition in which you would call NT a hybrid kernel, practically any modular kernel would be hybrid. In later versions of NT, the separations between kernel-mode components this post is praising have almost completely disappeared, and even the GUI is running in kernel mode...

immibis

9 days ago

AFAIK, Windows 3.1's registry was only used to store COM class information. It was just another type of single-purpose configuration file.

phibz

8 days ago

The article hit on some great high points of difference. But I feel like it misses Cutler and team's history with OpenVMS and MICA. The hallmarks of their design are all over NT. With that context it reads less like NT was avoiding UNIX's mistakes and more like it was built on years of learning from the various DEC offerings.

bawolff

9 days ago

> Internationalization: Microsoft, being the large company that was already shipping Windows 3.x across the world, understood that localization was important and made NT support such feature from the very beginning. Contrast this to Unix where UTF support didn’t start to show up until the late 1990s

I feel like this is a point for Unix. Unix being late to the Unicode party means UTF-8 was adopted, whereas Windows was saddled with UTF-16.

---

The NT kernel does seem to have some elegance. It's too bad it is not open source; Windows with a different userspace and desktop environment would be interesting.

chungy

9 days ago

Windows would be so much better if it were actually UTF-16. It's worse than that: it's from a world where Unicode thought "16 bits ought to be enough for anybody" and Windows NT baked that assumption deep into the system; it wasn't until 1996 that Unicode had to course-correct, and UTF-16 was carved out to be mostly compatible with the older standard (now known as UCS-2). For as long as you don't use the surrogate sequences in strings, you happen to be UTF-16 compatible; if you use the sequences appropriately, you happen to be UTF-16 compatible; if you use them in ways that are invalid UTF-16, now you've got a mess that's a valid name on the operating system.

I can't really blame NT for this, it's unfortunate timing and it remains for backwards compatibility purposes. Java and JavaScript suffer similar issues for similar reasons.

chungy

9 days ago

I'll throw this out there too: UTF-8 isn't necessarily better than UTF-16; they both support the entirety of the Unicode character space.

UTF-8 is convenient on Unix systems since it fits into the 8-bit character slots that were already in place; file systems have traditionally only forbidden the NULL byte and the forward slash, and all other characters are valid. Because of this, you can use UTF-8 in file names with ease on legacy systems; you don't need any operating system support for it.

UTF-8 is "space optimized" for ASCII text, while most extra-ASCII Latin, Cyrillic, Greek, Arabic characters need two bytes each (same as UTF-16); most of Chinese/Japanese/Korean script in the BMP requires three bytes in UTF-8, whereas you still only need two bytes in UTF-16. To go further beyond, all SMP characters (eg, most emoji) require four bytes each in both systems.

Essentially, UTF-8 is good space-wise for mostly-ASCII text. It remains on-par with UTF-16 for most western languages, and only becomes more inefficient than UTF-16 for east-Asian languages (in such regions, UTF-16 is already dominant).
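
Those per-script sizes are easy to verify; a small sketch, assuming a C11 compiler with Unicode string literals:

    #include <stdio.h>
    #include <string.h>
    #include <uchar.h>

    /* Length of a NUL-terminated UTF-16 string in 16-bit code units. */
    static size_t utf16_units(const char16_t *s)
    {
        size_t n = 0;
        while (s[n]) n++;
        return n;
    }

    static void report(const char *label, const char *utf8, const char16_t *utf16)
    {
        printf("%-8s %zu UTF-8 bytes, %zu UTF-16 bytes\n",
               label, strlen(utf8), 2 * utf16_units(utf16));
    }

    int main(void)
    {
        report("A",       (const char *)u8"A",          u"A");          /* 1 vs 2 */
        report("U+00E9",  (const char *)u8"\u00E9",     u"\u00E9");     /* 2 vs 2 */
        report("U+6F22",  (const char *)u8"\u6F22",     u"\u6F22");     /* 3 vs 2 */
        report("U+1F600", (const char *)u8"\U0001F600", u"\U0001F600"); /* 4 vs 4 */
        return 0;
    }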

bawolff

9 days ago

Space savings are irrelevant. Text is small, and after gzip it's going to be the same anyway.

Seriously, when was the last time you both cared about saving a single KB and compression was not an option? I'm pretty sure the answer is never. It was never in the 90s, and it is extra never now that we have hard drives hundreds of GB big.

UTF-8 is better because bytes are a natural unit. You don't have to worry about surrogates. You don't have to worry about de-synchronization issues. You don't have to worry about byte order.

Backwards compatibility with ASCII, and basically every interface ever, also helps. (Well, not 7-bit SMTP...) The fact is, ASCII is everywhere. Being able to treat it as just a subset of UTF-8 makes a lot of things easier.

> (eg, most emoji) require four bytes each in both systems.

This is misleading because most emoji are not just a single astral character but a combination of many.

nmz

8 days ago

Saving disk space and synchronization are only important for network transmission. At the local level, you will need to convert to something where you can get/know positioning; UTF-8 does not allow for this given its variability, which means a lot of operations are more expensive and you will have to convert to UTF-16 anyway.

mmoskal

8 days ago

Interestingly, in a JavaScript or similar runtime, most of the text that hits the caches where the size actually matters is still ASCII, even in the Far East, because of identifiers. UTF-8 for the win!

rbanffy

9 days ago

I don’t think the registry is a good idea. I don’t mind every program having its own dialect of a configuration language, all under the /etc tree. If you think about it, the /etc tree is just a hierarchical key-value store implemented as files where the persistent format of the leaf node is left to its implementer to decide.

aseipp

9 days ago

> If you think about it, the /etc tree is just a hierarchical key-value store

Well, you're in luck, I have good news for you -- Windows also has its own version of this concept: it's called "The Registry". You might have heard of it?

rkagerer

8 days ago

The registry would have been better if there were a stronger concept of "ownership" of the data it contains, tying each key to the responsible app / subsystem. I've tracked hundreds of software uninstalls and I would bet only about 1% of them actually remove all the cruft they originally stick in (or populated during use). The result is bloat, a larger surface area for corruption, and system slowdown.

Ironically in this respect it was a step backward... When settings lived in INI files, convention typically kept them in the same place as the program, so they were easy to find and were naturally extinguished when you deleted the software.

If you look at more modern OS's like Android and iOS they tend to enforce more explicit ties between apps and their data.

alt227

8 days ago

> system slowdown

This is often touted as a downside for the registry, and indeed a whole ecosystem of apps have evolved around this concept to 'clean' the registry and 'speed it up'.

In my experience of 35 years of using Windows, I have never noticed a bloated registry slowing down a computer. I have also never noticed a speed-up of the system from removing some unused keys. The whole point of addresses and key pairs is that individual bits of data can be written or read without loading the whole hive.

I wonder where this idea of a bloated slow registry came from?

rbanffy

8 days ago

Since the registry is a database, I would expect adding and removing branches and leaves to create fragmentation that, in the age of spinning metal and memory pressure, could create performance issues. A file system is easily defragmented with tools available in the operating system itself, but not the registry. I’m not even sure how much of it can be optimised (by doing garbage collection and defragmenting the underlying files) with the computer running.

If it makes use of indexes, changes will lead to the indexes themselves being fragmented, making performance even worse.

nullindividual

8 days ago

The registry was capable of being compacted, negating the need to defragment it. This was done via the standard Windows Backup utility provided OOTB.

As for performance, the registry was mapped in paged pool memory[0]; only sections in use needed to be mapped. Other hives were volatile and never persisted to disk. When data is added to the registry, the paged pool expands to accommodate it. The maximum registry size is based on installed memory, up to a limit.

Registry subkeys are organized alphabetically in an internal list; searches are binary searches rather than using an index. Searches begin in the middle of the list and go up or down based upon the alphabetical value being searched for (so start at 50% -> up/down, split remaining list 50%, up/down -> repeat until found).

You can find more info in Chapter 4 of Windows Internals 4th Edition.

Needless to say, none of the concerns you presented were valid back in the dark days.

[0] https://learn.microsoft.com/en-us/windows/win32/sysinfo/regi...

rkagerer

8 days ago

Anecdotally, I've experienced several PCs that became slow/unstable/unusable after a number of years. I can't scientifically prove it was due to the registry (other than a couple that had specific corruption).

But after I started using Total Uninstall religiously, from day 1 of a PC's life, my desktops have lasted indefinitely - going on 15 years for the latest one (yes, really). Hardware was of course upgraded along the way, making old driver removal paramount (which TU is very helpful with).

Analyzing its logs after a software installation has also been helpful to spot and surgically remove unwanted keys like autostarts, Explorer addins, etc.

nullindividual

8 days ago

This is the responsibility of the installer.

Using Windows Installer, this is easily accomplished. The MSI database _does_ track individual files and registry entries. If you're using another installer, or the developer allows their app to write something somewhere that isn't tracked by their installer, you're going to get files left behind.

macOS is especially bad in this respect. Bundles are great, until you have a ~/Library full of files that you never knew about after running an application.

rbanffy

8 days ago

> This is the responsibility of the installer

On any Unix I can grep my way into the /etc tree, find files belonging to uninstalled applications, and get rid of them myself. The whole point is that I can manage the “configuration database” with the same tools I manage a filesystem with. That is, if brilliant tools like apt and dnf fail to clean up after a program is uninstalled.

nullindividual

8 days ago

Windows is no different. Next time you're in front of Terminal, try:

    cd HKCU:\SOFTWARE
You're now browsing the registry and can use terminal commands.

rbanffy

8 days ago

When was this introduced? This is surprisingly enlightened.

Can I edit keys with a text editor?

nullindividual

8 days ago

It was introduced with Windows PowerShell 1.0[0]. A text editor would need to directly support managing the registry, but you can read/write/search/do terminal-y stuff to the registry via PowerShell.

The term we're looking for is a PSProvider of which there are many. There's even a PSProvider for SharePoint Online[2].

[0] https://devblogs.microsoft.com/scripting/use-the-powershell-...

[1] https://learn.microsoft.com/en-us/powershell/module/microsof...

[2] https://github.com/pnp/powershell/blob/96d00aa60379d8e3310f7...

amaccuish

8 days ago

Thank you for your reasoned response.

I still like the idea that I think you originally had, whereby apps could only write to their own specific area, thus containing all their configuration. I think that would solve 99% of all complaints about the registry.

Right now, they can write to anywhere your user has access to.

nullindividual

8 days ago

What happens if you have an extensible app, say Microsoft Office, which enumerates its own subkeys to discover 3rd party plugins?

What if an app provides COM services and needs to write that to a centralized location that is enumerated to discover available COM services?

What if your app happens to be a god-awful HP Print Center app with its own print drivers and a Windows Service, where it needs to write to a central location that is enumerated for device drivers and Windows Services?

amaccuish

8 days ago

> What happens if you have an extensible app, say Microsoft Office, which enumerates its own subkeys to discover 3rd party plugins?

Then you have Microsoft Office/Plug-Ins/plugin-guid, whereby Plug-Ins is user writable

> What if an app provides COM services and needs to write that to a centralized location that is enumerated to discover available COM services?

If it is providing services it can write; if it enumerates, it can read. You can also have multiple levels, like HKLM (machine) and HKCU (the user).

> What if your app happens to be a god-awful HP Print Center app with its own print drivers and a Windows Service, where it needs to write to a central location that is enumerated for device drivers and Windows Services?

May God help you. No, but you would have, again like above, a central location you could write to. Up to the Admin if Joe can write to this location or only privileged processes.

It's just that a lot of this was not enforced, or only partially (from the Windows side when it looks for things), so for everyone else it's a free-for-all.

The biggest disadvantage on the Linux side is that something like Group Policy is ridiculously difficult, because every app has its own location and DSL, and sometimes you have one central config file, and sometimes the app is considerate with something like override.d.

rkagerer

8 days ago

1) Office should "own" those keys, and provide a UI to manage its addins. When Office is removed, so is that chunk of registry.

2) COM discovery wasn't particularly well designed in the first place, IMO. It's a perfect example of where the keys should be explicitly tied to the owner (i.e. the COM provider) so they are removed when the component is uninstalled. So many programs leave reams of COM-related entries behind; this is table stakes for all those free (and IMO not very useful) registry cleaners.

In both your second and third examples, the OS could either:

a) Provide specific API's (printing is a fairly common service where it makes sense to have a lot of shared functionality hosted in the OS).

b) Designate well-known locations where apps can share access to keys. This is loosely the case today, but I argue the OS could do more to make them explicit and maintain ownership ties so the relevant ones are automatically removed when appropriate (I think Windows Store moved in that direction??).

c) Require one app to own and gatekeep the centralized information, and provide simple primitives that allow the other apps to interact with it to register/unregister themselves. The expectation is the owning app actually manages said information (hopefully providing some sort of UI) but at least when it's removed so will all the contained info.

The important thing is that ownership policies are maintained so when a driver / COM service / etc. are removed their cruft goes away along with them. I recognize there are edge cases but I'm not convinced they can't be solved in a generalized, well-thought-out fashion.

Personally I don't feel an app's storage needs to be completely isolated as is done in Android (a security/extensibility tradeoff).

A lot of this housekeeping comes down to an OS maker providing out-of-the-box tooling to developers, along with sensible defaults, that make following good habits natural and friction-free.

Microsoft in particular provided tools and documentation since the early days, but relied too much on developers to follow them and didn't do enough to make it "just work" for the lazy ones.

Then over the years they changed their minds several times along the way, so convention became a sometimes-conflicting spaghetti mess.

ruthmarx

8 days ago

> only about 1% of them actually remove all the cruft they originally stick in (or populated during use). The result is bloat, a larger surface area for corruption, and system slowdown.

I think this is a myth partly spread by commercial offerings that want to 'clean and optimize' a windows install.

Most of the cruft left in the registry is the equivalent of config files in /etc not removed after uninstalling an app. That stuff isn't affecting performance.

vkazanov

8 days ago

Fifteen-something years ago I had this unpleasant job where I had to install a major vendor's database on Windows server machines. I remember I also had a lengthy list of things to check and clean in the registry to make sure things worked.

Yes, these are configs. And no, we cannot just let applications do whatever they want in the shared config space without a way to trace things back to the original app.

At least in the Linux world I was able to just check what the distro scripts installed.

ruthmarx

8 days ago

> And no, we cannot just let applications do whatever they want in the shared config space without a way to trace things back to the original app.

We don't have to let them, but we do for the most part. We could use sandboxing technology to isolate and/or log, but mostly OSs don't do anything to restrict what an executable can do by default, at least as far as installing.

> At least in the Linux world I was able to just check what the distro scripts installed.

You can do this in Windows too sometimes, but it doesn't matter if it's a badly behaving app. There are linux installers that are just binary blobs and it would be a lot more work to monitor what they do also.

whoknowsidont

8 days ago

>but mostly OSs don't do anything to restrict what an executable can do by default, at least as far as installing.

There is a very mature and very powerful system for this called Jails.

>There are linux installers that are just binary blobs and it would be a lot more work to monitor what they do also.

This is simply not true. If I want to monitor an app in its entirety I can easily do so on most Unix-y systems.

Past the default tools that require some amount of systems knowledge to use correctly, you can easily just use Stow or CheckInstall (which work on most Linux systems).

There is no mechanism for doing this on Windows as even the OS loses track of it sometimes. And if you think I'm being dramatic, trust me, I am not. There is a reason the tools don't exist for Windows, at least not at feature parity.

ruthmarx

8 days ago

> There is a very mature and very powerful system for this called Jails.

No, jails aren't really the solution to the issue I'm talking about.

It's 'a' solution, but not the ideal solution.

> This is simply not true. If I want to monitor an app in its entirety I can easily do so on most Unix-y systems.

It is true, but I think you're missing my point. If I want to monitor any app on Windows I can do the same; I just need Procmon from Sysinternals.

> There is no mechanism for doing this on Windows as even the OS loses track of it sometimes.

There is, in fact there are numerous solutions.

The point was simply that there can be hostile installers on both Linux and Windows that require tools to see what they are doing. Linux isn't special in any way in this regard.

whoknowsidont

7 days ago

>It is true, but I think you're missing my point.

Maybe? What are you envisioning? Some sort of static analysis before a program runs? Explicit opt-ins to what a program needs, from the program itself (and only what it needs)?

ruthmarx

7 days ago

I'm just talking about respect for convention. The same way that, on an FHS-respecting distro, software should install to FHS paths and not, for example, make a new root-level directory. There's a respect for user preferences there.

It's not about using technical means to restrict software, but about the OS providing certain mechanisms and there being an expectation for trusted software that it will respect those conventions.

That's why I don't consider a jail a solution. It's an extra step the user has to carry out, and I don't think the burden should be on the user if it doesn't have to be. While in one sense it's good security practice to treat every program as malware, most users are not going to do that nor should they have to.

A tool like Sandboxie on Windows solves that problem in one sense, but not the actual root of the problem, which is it being more acceptable than it should be to go against user preferences and convention.

rkagerer

8 days ago

In my case the 1% is an estimate after manually inspecting hundreds of TU uninstall logs over the course of more than a decade.

I actually used to reach to the worst offenders with details on what their installers missed, sometimes they fixed it but most often they didn't care.

magicalhippo

9 days ago

And since it supports variable-length binary values, it fully supports that "the persistent format of the leaf node is left to its implementer to decide".
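
For the curious, a minimal sketch of what storing such a variable-length binary value looks like through the Win32 registry API; the key and value names here are made up for illustration:

    #include <windows.h>

    // Minimal sketch (hypothetical key/value names): registry values are typed
    // and variable-length, so a leaf can hold an arbitrary binary blob.
    int store_blob(const BYTE *data, DWORD size) {
        HKEY key;
        if (RegCreateKeyExW(HKEY_CURRENT_USER, L"Software\\ExampleApp", 0, NULL,
                            0, KEY_WRITE, NULL, &key, NULL) != ERROR_SUCCESS)
            return -1;
        LSTATUS rc = RegSetValueExW(key, L"Blob", 0, REG_BINARY, data, size);
        RegCloseKey(key);
        return rc == ERROR_SUCCESS ? 0 : -1;
    }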

rbanffy

8 days ago

Except that now it’s an opaque blob and not a text file I can use grep to find and vi to edit.

rbanffy

8 days ago

What exactly do I gain from the registry that compensates for the fact I can use any tool that works on files to manage the /etc tree?

Can you manage registry folders with Git and keep track of the changes you make? Can you grep for content? On a Mac it’s indexed by the systemwide text search tooling.

Using the file system to store that data is extremely powerful.

craigmoliver

8 days ago

You can create/export .reg files to your GIT repo/file system...but that may be one extra step and doesn't stay in sync automatically.

rbanffy

8 days ago

Another idea would be to do periodic snapshots of the /etc folder. That, sadly, excludes ext4, but any flavour of Solaris can easily do it.

phendrenad2

9 days ago

So what's wrong with the registry if a file-based registry is okay? The registry could be an abstraction layer over a Unix /etc filesystem, after all.

ruthmarx

8 days ago

I have no issue with the concept, but in practice the Windows registry is a lot more obfuscated than it needs to be. There can be trees and trees of UUIDs or similar, and there is no need for it to be so user unfriendly.

Part of this might be mixing internal stuff that people never need to see with stuff that people will need to access.

nullindividual

8 days ago

That's a developer choice, not the registry in and of itself. You could just as easily have /etc filled files with GUIDs for file names.

Generally, 3rd party developers don't use a bunch of GUIDs for keys. Microsoft does for Windows components to map the internal object ID of whatever they're dealing with; my assumption is for ease of recognition/documentation on their side (and generally the assumption that the end user shouldn't be playing around in there).

ruthmarx

8 days ago

> That's a developer choice, not the registry in and of itself. You could just as easily have /etc filled files with GUIDs for file names.

For sure, that's why I said I have no issue with the concept but rather how it's used in practice.

> Microsoft does for Windows components to map the internal object ID of whatever they're dealing with; my assumption is for ease of recognition/documentation on their side (and generally the assumption that the end user shouldn't be playing around in there).

That's maybe fair, but most of that stuff isn't stuff the user even needs to access most of the time. Maybe separating it out from all the HKLM and HKCU software trees would have made sense.

nullindividual

8 days ago

HKLM and HKCU have specific ACLs. It wouldn't make sense to have _two_ user-modifiable registry hives and ditto for machine.

ruthmarx

8 days ago

I don't really understand your point here. It would make perfect sense to have two separate hives by sectioning off all the stuff users, even power users actually need access to. 'specific ACLs' have no bearing on that.

nullindividual

7 days ago

That already exists! It's the HKCU.

ruthmarx

7 days ago

I really think you've missed my point and I don't understand how.

amaccuish

8 days ago

Everyone says, oh /etc is fine, but no registry. But let's be honest, on a user workstation, /etc is only one of many places where config can be found.

nmz

8 days ago

Labeling, fstab is supposed to be a .tsv, what happens if I mistype something? where's the safety? Does this need to be a DSL?

JackSlateur

8 days ago

"rm -rf /etc/nginx" versus "try to remember where are the miriads of random keys spread everywhere"

alt227

8 days ago

Then it seems like the issue you have is how vendors are storing keys in the registry, not the registry itself. In your example, if nginx made a single node in the registry with all its keys under that then it would be just as easy to remove that single node as it would be to remove the single directory.

However in the real world it is never as simple as you suggest. Linux apps often litter the filesystem with data. An app might have files in /etc and /opt, with shortcut scripts in /root/bin and /usr/sbin, and config files in /home and /usr directories. Linux file systems are just as littered as Windows registries in my experience, if not worse, because they differ between distros.

mixmastamyk

9 days ago

I like what the elektra project was trying to do, but it didn't catch on. Basically put config into the filesystem with a standard schema, etc. Could use basic tools/permissions on it, rsync it, etc. Benefits of the registry but less prone to failure and no tools needed to be reinvented.

delta_p_delta_x

9 days ago

I can't quite decide if this comment is sarcasm or not.

ThinkBeat

9 days ago

Architecturally WindowsNT was a much better designed system than Linux when it came out.

I wish it had branched to one platform as a workstation OS (NTNext).

and one that gets dumber and worse in order to make it work well for gaming: Windows 7/8/10/11, etc.

Technically one would hope that NTNext would be Windows Server, but sadly no.

I remember installing Windows NT on my PC, in awe of how much better it was than DOS/Windows 3 and later 95.

And compatibility back then was not great. There was a lot that didn't work, and I was more than fine with that.

It could run Win32, OS/2, and POSIX, and it could be extended to run other systems in the same way.

POSIX was added as a necessity to bid for huge software contracts from the US government; MS lost a big contract and then lost interest in the POSIX subsystem, and in the OS/2 subsystem.

Did away with it, until they re-invented a worse system for WSL.

Note that "subsystem" in Windows NT means something very different from "subsystem" in the Windows Subsystem for Linux.

kev009

8 days ago

Linux is one of the prime examples of the Worse is Better motif in UNIX history. And to all your points I don't think it's outgrown this, it is not a particularly elegant kernel even by UNIX standards.

Linus and other early Linux people had a really good nose for performance: not the gamesmanship kind, just stacking often-minuscule individual wins and avoiding too many layered architectural penalties. When it came time to worry about scalability, Linux got lucky with the IBM investment; IBM had also just purchased Sequent and knew a thing or two about SMP and NUMA.

Most commercial systems are antithetical to performance (there are some occasional exceptions). The infamous exchange where a Sun performance engineer tried to ridicule David Miller, who was showing real-world performance gains, comes to mind.

I think that keen sense of performance really helped with adoption. Since the early days you could install Linux on hardware that was rescued from the garbage and do meaningful things with it, whereas the contemporary commercial systems had written it off as obsolete.

kbolino

9 days ago

WSL was a real subsystem. It worked in similar ways to the old subsystems. However, the Linux kernel is different enough from Windows, and evolves so much faster, that Microsoft wasn't able to keep up. WSL2 is mostly just fancy virtualization, but this approach has better compatibility and, thanks to modern hardware features, better performance than the original WSL ever had.

Const-me

9 days ago

> thanks to modern hardware, very little performance penalty

I would not call 1 order of magnitude performance penalty for accessing a local file system “little”: https://github.com/microsoft/WSL/issues/4197

kbolino

9 days ago

Yeah, that's pretty bad. The fact that mounting the volume as a network share gets better performance is surprising and somewhat concerning.

However, what I was talking about performance-wise was the overhead of every system call. That overhead is gone under WSL2. Maybe it wasn't worth it for that reason alone, but original WSL could never keep up with Linux kernel and ecosystem development.

Being able to run nearly all Linux programs with only some operations being slow is probably still better than being able to run only some Linux programs with all operations being slow.

wvenable

8 days ago

The problem with WSL1 was the very different file system semantics between Windows and Linux. On Linux files are dumb and cheap. On Windows files are smarter and more expensive. Mapping Linux file system calls on Windows worked fine but you couldn't avoid paying for that difference when it came up.

You can't resolve that issue while mapping everything to Windows system calls. If you're not going to map to Windows system calls then you might as well virtualize the whole thing.

sebstefan

8 days ago

>The C language: One thing Unix systems like FreeBSD and NetBSD have fantasized about for a while is coming up with their own dialect of C to implement the kernel in a safer manner. This has never gone anywhere except, maybe, for Linux relying on GCC-only extensions. Microsoft, on the other hand, had the privilege of owning a C compiler, so they did do this with NT, which is written in Microsoft C. As an example, NT relies on Structured Exception Handling (SEH), a feature that adds try/except clauses to handle software and hardware exceptions. I wouldn’t say this is a big plus, but it’s indeed a difference.

Welp, that's an unfortunate use of that capability given what we see today in language development when it comes to secondary control flows.

nmz

8 days ago

> Linux’s io_uring is a relatively recent addition that improves asynchronous I/O, but it has been a significant source of security vulnerabilities and is not in widespread use.

Funny, opening the manpage of aio on freebsd you get this on the second paragraph

> Asynchronous I/O operations on some file descriptor types may block an AIO daemon indefinitely resulting in process and/or system hangs. Operations on these file descriptor types are considered “unsafe” and disabled by default. They can be enabled by setting the vfs.aio.enable_unsafe sysctl node to a non-zero value.

So nothing is safe.
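
For reference, the interface that man page is describing is (roughly) POSIX AIO; a minimal sketch of queuing one asynchronous read follows. The file path is arbitrary, and on Linux this needs -lrt at link time.

    #include <aio.h>
    #include <errno.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    // Sketch: the read is handed to the kernel (or an AIO daemon) and
    // completion is polled with aio_error()/aio_return().
    int main(void) {
        char buf[4096];
        int fd = open("/etc/hosts", O_RDONLY);
        if (fd < 0) return 1;

        struct aiocb cb;
        memset(&cb, 0, sizeof cb);
        cb.aio_fildes = fd;
        cb.aio_buf    = buf;
        cb.aio_nbytes = sizeof buf;

        if (aio_read(&cb) != 0) return 1;      // queue the asynchronous read
        while (aio_error(&cb) == EINPROGRESS)  // real code would do other work here
            usleep(1000);

        printf("read %zd bytes asynchronously\n", aio_return(&cb));
        close(fd);
        return 0;
    }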

CoolCold

7 days ago

Those are unsafe from different perspectives - AFAIR, io_uring just bypassed some/many of the filtering mechanisms applied in other kernel subsystems when, say, reading/writing files, so security frameworks could not be enforced. Something about the auditd subsystem as well.

jart

8 days ago

> Unified event handling: All object types have a signaled state, whose semantics are specific to each object type. For example, a process object enters the signaled state when the process exits, and a file handle object enters the signaled state when an I/O request completes. This makes it trivial to write event-driven code (ehem, async code) in userspace, as a single wait-style system call can await for a group of objects to change their state—no matter what type they are. Try to wait for I/O and process completion on a Unix system at once; it’s painful.

Hahaha. Try polling stdin, a pipe, an ipv4 socket, and an ipv6 socket at the same time.
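
For reference, on the Unix side those four things are all plain file descriptors, so a single poll() covers them; a minimal sketch follows, with the pipe and socket descriptors assumed to be opened elsewhere. The pain the quoted article refers to is mixing this with waiting for process completion, which traditionally meant SIGCHLD or a self-pipe (modern Linux adds pidfd for this).

    #include <poll.h>
    #include <stdio.h>
    #include <unistd.h>

    // Sketch: everything here is just a file descriptor, so one poll()
    // multiplexes them. fd_pipe, fd_v4 and fd_v6 are assumed to be opened
    // elsewhere (pipe(), socket(AF_INET, ...), socket(AF_INET6, ...)).
    int wait_any(int fd_pipe, int fd_v4, int fd_v6) {
        struct pollfd fds[] = {
            { .fd = STDIN_FILENO, .events = POLLIN },
            { .fd = fd_pipe,      .events = POLLIN },
            { .fd = fd_v4,        .events = POLLIN },
            { .fd = fd_v6,        .events = POLLIN },
        };
        int n = poll(fds, 4, -1);            // block until something is readable
        if (n < 0) { perror("poll"); return -1; }
        for (int i = 0; i < 4; i++)
            if (fds[i].revents & POLLIN)
                printf("fd %d is readable\n", fds[i].fd);
        return n;
    }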

rkagerer

8 days ago

Can we talk about NT picoprocesses?

Up until this feature was added, processes in NT were quite heavyweight: new processes would get a bunch of the NT runtime libraries mapped in their address space at startup time. In a picoprocess, the process has minimal ties to the Windows architecture, and this is used to implement Linux-compatible processes in WSL 1.

They sound like an extremely useful construct.

Also, WSL 2 always felt like a "cheat" to me... is anyone else disappointed they went full-on VM and abandoned the original approach? Did small file performance ever get adequately addressed?

tjoff

8 days ago

I'm surprised they didn't go the WSL2 route from the start. Seems much easier to do.

But WSL is so cool and since I mostly run windows in VMs without nested virtualization support I've pretty much only used that one and am super thankful for it.

netbsdusers

8 days ago

I think (but am not sure) that WSL was a consolation prize of the cancelled Project Astoria, the initiative from the dying days of Windows Phone to support running Android apps on Windows. Implementing this with virtualisation would have been more painful and less practical on the smartphones of the day.

torginus

8 days ago

Honestly, before WSL 1 was a thing, Cygwin already existed, and it was good enough for most people.

tjoff

8 days ago

Cygwin is capable but I feel WSL is a total game changer.

PaulHoule

9 days ago

(1) There has been some convergence, such as FUSE in Linux which lets you implement file systems in user space, Proton emulates NT very well, and

(2) Win NT's approach to file systems makes many file system operations very slow, which makes npm and other dev systems designed for Unix terribly slow on NT. Which is why Microsoft gave up on the otherwise excellent WSL1. If you were writing this kind of thing natively for Windows you would stuff blobs into SQLite (in effect a true “user space filesystem”) or ZIP files or some other container instead of stuffing 100,000 files in directories.

nullindividual

9 days ago

> NT’s approach to file systems makes many file system operations very slow

This is due to the file system filters. It shows when using DevDrive where there are significant performance improvements.

immibis

9 days ago

I wonder how much of Linux's performance is attributable to it not having such grand architectural visions (e.g. unified object manager) and therefore being able to optimize each specific code path more.

chasil

9 days ago

Many Linux systems on TPC.org are running on XFS (everything rhel-based). It's not simple, but it does appear to help SQL Server.

delta_p_delta_x

9 days ago

NT is why I like Windows so much and can't stand Unix-likes. It is object-oriented from top to bottom, and I'm glad in the 21st century PowerShell has continued that legacy.

But as someone who's used all versions of Windows since 95, this paragraph strikes me the most:

> What I find disappointing is that, even though NT has all these solid design principles in place… bloat in the UI doesn’t let the design shine through. The sluggishness of the OS even on super-powerful machines is painful to witness and might even lead to the demise of this OS.

I couldn't agree more. Windows 11 is irritatingly macOS-like and for some reason has animations that make it appear slow as molasses. What I really want is a Windows 2000-esque UI with dense, desktop-focused UIs (for an example, look at Visual Studio 2022 which is the last bastion of the late 1990s-early 2000s-style fan-out toolbar design that still persists in Microsoft's products).

I want modern technologies from Windows 10 and 11 like UTF-8, SSD management, ClearType and high-quality typefaces, proper HiDPI scaling (something that took desktop Linux until this year to properly handle, and something that macOS doesn't actually do correctly despite appearing to do so), Windows 11's window management, and a deeper integration of .NET with Windows.

I'd like Microsoft to backport all that to the combined UI of Windows 2000 and Windows 7 (so probably Windows 7 with the 'Classic' theme). I couldn't care less about transparent menu bars. I don't want iOS-style 'switches'. I want clear tabs, radio buttons, checkboxes, and a slate-grey design that is so straightforward that it could be software-rasterised at 4K resolution, 144 frames per second without hiccups. I want the Windows 7-style control panel back.

wvenable

8 days ago

> Windows 11 is irritatingly macOS-like and for some reason

I bet dollars to donuts that all the designers who come up with new Windows designs are using Macs.

The old Windows UI was designed out of painstaking end user testing which was famously responsible for the Start button.

SoothingSorbet

8 days ago

Indeed. And importantly, you could tell exactly which UI elements were which. It's sometimes genuinely difficult to tell if an element is text, a button, or a button disguised as a link on Windows 10/11.

nullindividual

9 days ago

> What I really want is a Windows 2000-esque UI with dense

Engineers like you and I want that. The common end user wants flashy, sleek, stylish (and apparently CandyCrush).

But don't forget that that 2000 UI was flashy, sleek, and stylish with its fancy-pants GDI+ and a mouse pointer with a drop shadow!

EvanAnderson

9 days ago

> The common end user wants flashy, sleek, stylish (and apparently CandyCrush).

Do they, though? I get the impression that nobody is actually testing with users. It seems more like UI developers want "flashy, sleek, stylish" and that's what's getting jammed down all our throats.

mattkevan

9 days ago

As a UI designer and developer, I would push the blame further along the stack and say that execs and shareholders want “flashy, sleek, stylish”, in the same way everything has to have AI jammed in now, lest the number start going down or not up quite as fast as hoped.

reisse

9 days ago

Ah, it's a cycle repeating itself. I remember when Microsoft first released XP it was considered bloated (UI-wise) compared to Windows 2000 and Windows 95/98/ME. Then Vista came and all of a sudden XP was in the limelight for being slim and fast!

aleph_minus_one

9 days ago

Even when Vista came, people said all the time that they considered Windows 2000 to be much less UI-bloated than Windows XP; it was just that, of the "UI bloat evils", Windows XP was considered the lesser evil compared to Windows Vista. I really heard nobody saying that XP was slim and fast.

BTW: Windows 7 is another story: at that time PC magazines wrote deep analyses how some performance issues in the user interface code of Windows Vista that made Vista feel "sluggish" were fixed in Windows 7.

whoknowsidont

8 days ago

>It is object-oriented from top to bottom,

On what planet is this a good thing? What does this realistically and practically mean outside of some high-level layer that provides syntax sugar for B2B devs? Lord knows you better not be talking about COM.

I honestly only see these types of comments from people who do NOT do systems programming.

Melatonic

9 days ago

Agreed with you on everything here.

That being said if you run something like Win10 LTSC (basically super stripped down win10 with no tracking and crapware) and turn off all window animations / shadows / etc you might be very surprised - it is snappy as hell. With a modern SSD stuff launches instantly and it is a totally different experience.

wkat4242

9 days ago

I run LTSC. You still get the tracking and some crapware sadly.

AshamedCaptain

9 days ago

> The sluggishness of the OS even on super-powerful machines is painful to witness and might even lead to the demise of this OS.

NT has been sluggish since _forever_. It is hardly a bloated GUI problem. On machines where 9x would literally fly, NT would fail to boot due to low memory.

nullindividual

9 days ago

Not sure what NT4 systems you were dealing with, but I've dealt with ones thrashing the page file on spinning rust and the GUI is still responsive.

NT4 had a higher base RAM requirement than 9x. Significantly so.

AshamedCaptain

8 days ago

The point being, 3.x/9x and NT using the same GUI, yet NT consistently requiring up to 4 times more RAM. NT itself was ridiculously bloated, not the GUI.

ruthmarx

8 days ago

> NT is why I like Windows so much and can't stand Unix-likes. It is object-oriented from top to bottom,

This sounds like you are talking from a design perspective and the rest of your post seems to be from a usability perspective. Is this correct?

> Windows 11 is irritatingly macOS-like

macOS is such an objectively inferior design paradigm, very frustrating to use. It's Apple thinking 'different' for the sake of being different, not because it's good UI.

I only keep a W10 image around because it's still supported and W11 seems like a lot more work to beat into shape. OpenShell at least makes things much better.

EvanAnderson

9 days ago

I love that NT was actually designed. I don't necessarily like all of the design but I like that people actually thought about it.

jiripospisil

8 days ago

> Windows 11 is irritatingly macOS-like and for some reason has animations that make it appear slow

Not sure about Windows but on macOS you can disable most of these animations - look for "Reduce motion" in Accessibility, the same setting is available on iOS/iPadOS. The result seems snappier.

pcwalton

9 days ago

> slate-grey design that is so straightforward that it could be software-rasterised at 4K resolution, 144 frames per second without hiccups

This is not possible (measure software blitting performance and you'll see), and for power reasons you wouldn't want to even if it were.
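
Rough arithmetic behind that claim: a 3840x2160 framebuffer at 4 bytes per pixel is about 33 MB, so at 144 Hz touching every pixel just once per frame is already roughly 4.8 GB/s of memory writes, before any reads, blending, or multiple UI layers, which is part of why compositing leans on the GPU.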

kev009

9 days ago

It's never been commonplace, but can't you still run alternate shells ("shell" here being the Windows term for the GUI, not a command interpreter as in UNIX parlance)?

bragr

9 days ago

chasil

9 days ago

The wiki that you posted does not include busybox.

https://frippery.org/busybox/index.html

kev009

9 days ago

You didn't understand the parenthetical, busybox has no relation to the Windows shell.

chasil

8 days ago

It is a Windows shell. A POSIX Windows shell.

Such a shell should have been present from the beginning, instead of the OS/2 affliction.

bigstrat2003

8 days ago

"shell" in Windows roughly refers to the GUI, not a CLI host like it does in Unix.

kev009

8 days ago

Unfortunately it is still clear you don't understand my comment nor the linked wikipedia page at all. Windows Shell isn't the same use of the word you are overloading here in this thread and you are out in left field from what everyone else is talking about. Maybe revisit the wikipedia page and read a little more, look at the project descriptions&screencaps, and it will make sense to you.

EricE

9 days ago

Indeed the first thing I do on a new Windows install is load Open Shell.

p_l

9 days ago

You still can, and it's even exposed specifically for making constrained setups (though not everyone knows to use it)

chasil

9 days ago

Busybox has a great shell in the Windows port.

It calls itself "bash" but it is really the Almquist shell with a bit of added bash syntactic sugar. It does not support arrays.

https://frippery.org/busybox/index.html

markus_zhang

9 days ago

Win 2000 was the pinnacle. I stuck to it until WinXP was almost out of breath and reluctantly moved to it. Everything afterwards is pretty meh.

ruthmarx

8 days ago

Windows 7 was pretty great.

chriscappuccio

8 days ago

Windows was great at running word processors. BSD and Linux were great as internet-scale servers. It wasn't until Microsoft tried running Hotmail on NT that they had any idea there was a problem. Microsoft used this experience to fix problems, and that ultimately made Windows better for all users across many use cases.

All the talk here about how Windows had a better architecture in the beginning conveniently avoids the fact that Windows was well known for being over-designed while delivering much less than its counterparts in the networking arena for a long time.

It's not wrong to admire what Windows got right, but Unix got so much right by putting attention where it was needed.

ExoticPearTree

8 days ago

I have been using Windows and Linux for about 20+ years. Still remember the Linux Kernel 2.0 days and Windows NT 4. And I have to admit that I am more familiar with the Linux kernel than the Windows one.

Now, that being said, I think the Windows kernel sounded better on paper, but in reality Windows was never as stable as Linux (even in its early days) at doing everyday tasks (file sharing, web/mail/dns server etc.).

Even to this day, the Unix philosophy of doing one thing and doing it well stands: maybe the Linux kernel wasn't as fancy as the Windows one, but it did what it was supposed to do very well.

torginus

8 days ago

Windows is way more stable than Linux when you consider the entire desktop stack. The number of crashes, and data loss bugs I've experienced when using Linux over the years constantly puts me off from using it as a daily driver

1970-01-01

8 days ago

I remember when 'Linux doesn't get viruses' was a true statement. It did things well because it wasn't nearly as popular as WinNT (because it wasn't nearly as user friendly as WinNT) and you needed an experienced administrator to get anything important running.

ssrc

8 days ago

I remember reading those books (ok, it was the 4.3 BSD edition instead of 4.4) alongside Bach's "The Design of the Unix Operating System" and Uresh Vahalia's "UNIX internals: the new frontiers" (1996). I recommend "UNIX internals". It's very good and not as well known as the others.

walki

8 days ago

I feel like the NT kernel is in maintenance only mode and will eventually be replaced by the Linux kernel. I submitted a Windows kernel bug to Microsoft a few years ago and even though they acknowledged the bug the issue was closed as a "won't fix" because fixing the bug would require making backwards incompatible changes.

Windows currently has a significant scaling issue because of its Processor Groups design, which is really an ugly hack that was added in Windows 7 to support more than 64 logical processors. Everyone makes bad decisions when developing a kernel; the difference between the Windows NT kernel and the Linux kernel is that fundamental design flaws tend to eventually get fixed in the Linux kernel while they rarely get fixed in the Windows NT kernel.
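
For anyone who hasn't bumped into them, here is a minimal sketch of how Processor Groups show up through the Win32 API; GetActiveProcessorGroupCount and friends are real Windows 7+ calls, while the program itself is only illustrative:

    #include <windows.h>
    #include <stdio.h>

    // Sketch: each processor group holds at most 64 logical processors, and a
    // thread runs within a single group unless the program explicitly calls
    // SetThreadGroupAffinity (or similar) to move across groups.
    int main(void) {
        WORD groups = GetActiveProcessorGroupCount();
        printf("processor groups: %u\n", (unsigned)groups);
        for (WORD g = 0; g < groups; g++)
            printf("  group %u: %lu logical processors\n",
                   (unsigned)g, GetActiveProcessorCount(g));
        printf("total: %lu\n", GetActiveProcessorCount(ALL_PROCESSOR_GROUPS));
        return 0;
    }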

nullindividual

8 days ago

NT kernel still gets improvements. Think IoRing (copy of io_uring, but for file reads only) which is a new feature.

I think things like Credential Guard, various virtualization (security-related, not VM-related) are relatively new kernel-integrated features, etc.

Kernel bugs that need to exist because of backwards compat are going to continue to exist since backwards compat is a design goal of Windows.

JackSlateur

8 days ago

I have the same feeling

Windows is more and more based on virtualization

On the other hand, more and more Microsoft stuff is Linux native.

It would not surprise me if Linux runs every Windows, somewhere deep down, in the next decades.

More hybridizations are probably coming, but where will it stop? And why?

netbsdusers

8 days ago

I think rumours of NT's terminal illness have been greatly exaggerated. There are numerous new developments I am hearing about from it, like the adoption of RCU and the memory partitions.

It's not clear to me how processor groups inhibit scaling. It's even sensible to constrain moving threads willy-nilly between cores in a lot of cases (because of NUMA locality, caches, etc.). And it looks like there's an option to not confine your program to a single processor group, too.

phendrenad2

7 days ago

Running all NT applications in a virtualization layer over top of the Linux kernel would surely impose a performance penalty, and for what, so that someone can run high-performance Linux applications on Windows? It's a bewildering line of reasoning, to be sure.

methods21

7 days ago

At the time, NT being able to run on multiple architectures (e.g. Alpha) was rather impressive. I believe this came from a lot of the knowledge of the former DEC team working on NT. Reading the comments here, especially about drivers, makes me think about how much engineering went towards this (and perhaps towards the limitations of the driver architecture) that could instead have been put towards a stronger driver design AND hot-adding of certain software, as I probably lost a year of my life waiting on NT reboots.

userbinator

8 days ago

Controversial opinion: DOS-based Windows/386 family (including 3.11 enhanced mode, 95, 98, up to the ill-fated ME) are even more advanced than NT. While Unix and NT, despite how different they are in the details are still "traditional" OSes, the lineage that started at Windows/386 are hypervisors that run VMs under hardware-assisted virtualisation. IMHO not enough has been written about the details of this architecture, compared to Unix and NT. It's a hypervisor that passes through most accesses to the hardware by default, which gave it a bad reputation for stability and security, but also a great reputation for performance and efficiency.

nullindividual

8 days ago

> Windows/386 are hypervisors that run VMs under hardware-assisted virtualisation.

Not really. There was no Ring -1 (hypervisor), no hardware-assisted virtualization as we use the term today. On Windows/386, it ran in Ring 0.

Virtual 8086 mode was leveraged via the NTVDM, shipping with the first release of NT.

userbinator

8 days ago

No, it was a real hypervisor, running VMs (and DOS) in ring 3. They didn't call it VMM32 for nothing.

nullindividual

8 days ago

Raymond Chen says "Not true, but true enough" [0]. But to your original claim of more advanced than NT... nah.

[0] https://web.archive.org/web/20111110161740/http://blogs.msdn...

[Extra] https://devblogs.microsoft.com/oldnewthing/20130208-00/?p=53...

userbinator

7 days ago

Raymond still works for Microsoft so he has to toe the company line.

On the other hand, articles from people like Andrew Schulman and Matt Pietrek (before he got bought out by MS) are far more explicit about the truth.

As for being more advanced than NT, it depends what you consider "more advanced"; a traditional OS, or a hypervisor? It certainly takes some effort to wrap your head around the VxD driver model of the latter, while the former is quite straightforward.

wmf

8 days ago

Is that architecture actually good or is it just complex? If it's more advanced, why did MS replace it with NT? It has long been known that you can trade off performance and protection; in retrospect 95/98 just wasn't reliable enough.

tracker1

8 days ago

I remember using NT4 for web dev work in the later 90's... it was kind of funny how many times browsers (Netscape especially) would crash from various memory errors that never happened in Win9x... I wonder how many of those were early exploits/exploitable issues. That, and dealing with Flash around that time, and finding out I could access the filesystem.

I pretty much ran NT/Win2K at home for all internet stuff from then on, without flash at all. I do miss the shell replacement I used back in those days, I don't remember the name, but also remember Windowblinds.

wmf

8 days ago

Yep, the Mac was the same way. Plenty of apps would dereference null pointers and just keep going. You'd reboot every day or two to clear out the corruption.

userbinator

8 days ago

I think it's because that architecture was less understood than the traditional OS model at the time; and they could've easily virtualised more of the hardware and gradually made passthrough not the default, eventually arriving at something like Xen and other bare-metal hypervisors that later became common.

...and as the sibling comment alludes to, MS eventually adopted that architecture somewhat with Hyper-V and the VBS feature, but now running NT inside of the hypervisor instead of protected-mode DOS.

phendrenad2

7 days ago

The DOS-based Windowses were really impressive when you consider how horrible x86 was back then.

omnibrain

8 days ago

Isn’t it similar when you use Hyper-V?

lukeh

8 days ago

No one was using X.500 for user accounts on Solaris, until LDAP and RFC 2307 came along. And at that point hardly anyone was using X.500. A bit more research would have mentioned NIS.

Circlecrypto2

9 days ago

A great article that taught me a lot of history. As a long time Linux user and advocate for that history, I learned there is actually a lot to appreciate from the work that went into NT.

RachelF

9 days ago

The original NT was a great design, built by people who knew what they were doing and had done it before (for VMS). It was superior to the Unix design when it came out, benefiting from the knowledge of 15 years.

I worked on kernel drivers starting with NT 3.5. However, over the years, the kernel has become bloated. The bloat is both in code and in architecture.

I guess this is inevitable as the original team has long gone, and it is now too large for anyone to understand the whole thing.

jonathanyc

8 days ago

Great article, especially loved the focus on history! I’ve subscribed.

> Lastly, as much as we like to bash Windows for security problems, NT started with an advanced security design for early Internet standards given that the system works, basically, as a capability-based system.

I’m curious as to why the NT kernel’s security guarantees don’t seem to result in Windows itself being more secure. I’ve heard lots of opinions but none from a comparative perspective looking at the NT vs. UNIX kernels.

benchloftbrunch

7 days ago

Probably because of the bad example set by XP Home (which deliberately crippled a lot of NT's advanced security features) having insecure defaults, and most people not bothering and just running as admin all the time.

parl_match

9 days ago

> Unix’s history is long—much longer than NT’s

Fun fact: NT is a spiritual (and in some cases, literal) successor of VMS, which itself is a direct descendant of the RSX family of operating systems, which are themselves a descendant of a process control family of task runners from 1960. Unix goes back to 1964 - Multics.

Although yeah, Unix definitely has a much longer unbroken chain.

stevekemp

9 days ago

I wonder how many of the features which other operating systems got much later, such as the unified buffer cache, were due to worries of software patents?

AdeptusAquinas

8 days ago

Could someone explain the 'sluggish UI responsiveness' talked about in the conclusion? I've never experienced it in 11, 10, 8 or 7 etc - but maybe that's because my Windows machines are always gaming machines and have a contemporary powerful graphics card. I've used a Mac Pro for work a couple of times and never noticed it being snappier than my home machine.

IgorPartola

8 days ago

I think this is talking about a much older version of Windows, such as XP and Vista. Vista was particularly bad.

user

8 days ago

[deleted]

pseudosavant

8 days ago

Great article that is largely on point. I find it funny that it ends with a bit about how the Windows UI might kill off the Windows OS.

Predicting the imminent demise of Windows is as common, and accurate, of a take as saying this is the year of Linux on the desktop or that Linux takes over PC gaming.

pid1wow

8 days ago

They say Windows has a more advanced security system, but what does that actually mean in practice? Okay, everything is an object, so you can just set permissions on objects. Okay, the OS just has to check whether you have permission on an object before you access it.

What if there are just a billion objects and you can't tell which ones need which permission, as an administrator? I couldn't tell from the article whether this example actually exists, as it only talks abstractly about the subject. But Windows security stuff just sounds like a typical convoluted system that never worked. This is probably one of the few places where UN*X is better off; not that it's any good, since it doesn't support any use case other than separating the web server process from the DNS server process, but it's very simple.

What if the objects do not describe the items I need to protect in sufficient detail? How many privilege escalation / lateral movement vulns were there in Windows vs any UN*X?
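
For concreteness, here is a minimal sketch of what "permissions on an object" means at the API level: every securable NT object is created with a security descriptor, and the kernel checks it on access. The event name below is made up, and the NULL DACL (everyone gets full access) is only to keep the sketch short; real code would build an explicit ACL.

    #include <windows.h>

    // Sketch: securable NT objects (events, mutexes, files, processes, ...)
    // carry a security descriptor; the kernel checks it on every open/access.
    HANDLE create_shared_event(void) {
        SECURITY_DESCRIPTOR sd;
        InitializeSecurityDescriptor(&sd, SECURITY_DESCRIPTOR_REVISION);
        // NULL DACL = everyone gets full access; real code would add explicit
        // ACEs with AddAccessAllowedAce() and friends.
        SetSecurityDescriptorDacl(&sd, TRUE, NULL, FALSE);

        SECURITY_ATTRIBUTES sa = { sizeof(sa), &sd, FALSE };
        // "ExampleEvent" is a made-up name for illustration.
        return CreateEventW(&sa, TRUE, FALSE, L"ExampleEvent");
    }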

user

8 days ago

[deleted]

user

8 days ago

[deleted]

IgorPartola

8 days ago

What I don’t see in the comments here is the argument I remember hearing in the late 90s and early 2000s: that Unix is simpler than Windows. I certainly feel like it was easier for me to grasp the POSIX API compared to what Windows was doing at the time.

gunapologist99

8 days ago

Excellent article. The admittedly abbreviated history unfortunately completely missed the shared (and contentious) history between IBM's 16- and 32-bit OS/2 in the run-up to 32-bit Windows NT.

jhallenworld

9 days ago

NT has its object manager.. the problem with it is visibility. Yes, object manager type functionality was bolted-on to UNIX, but at least it's all visible in the filesystem. In NT, you need a special utility WinObj to browse it.

virgulino

8 days ago

Inside Windows NT, the 1st edition, by Helen Custer, is one of my all-time favorite books! A forgotten gem. It's good to see it being praised.

p0seidon

9 days ago

This is such a well-written read, just so insightful and broad in its knowledge. I learned a lot, thank you (loved NT at that time - now I know why).

desdenova

9 days ago

Unfortunately NT doesn't have any usable distribution, so it's still a theoretical OS design.

MaxGripe

8 days ago

The sluggishness of the system on new hardware is an accurate observation, but I think the author should also take a look at macOS or popular Linux distros, where it's significantly worse

MaxGripe

8 days ago

"I think Apple is AmAzInG so I will downvote you"

btbuilder

8 days ago

  # shifting each letter forward by one: "VMS" -> "WNT"
  input_string = "VMS"
  output_string = ''.join([chr(ord(char) + 1) for char in input_string])
  print(output_string)

fargle

8 days ago

this is a lovely and well written article, but i have to quibble with the conclusion. i agree that "it’s not clear to me that NT is truly more advanced". i also agree with the statement "It is true that NT had more solid design principles at the onset and more features than its contemporary operating systems"

but what i don't agree with is that it was ever more advanced or "better" (in some hypothetical single-dimensional metric). the problem is that all that high-minded architectural art gets in the way of practical things:

    - performance, project: (m$ shipping product, maintenance, adding features, designs, agility, fixing bugs)

    - performance, execution (anyone's code running fast)

    - performance, market (users adopting it, building new unpredictable things)
it's like minix vs. linux again. sure minix was at the time in all theoretical ways superior to the massive hack of linux. except that, of course, in practice theory is not the same as practice.

in the mid 2000-2010s my workplace had a source license for the entire Windows codebase (view only). when the API docs and the KB articles didn't explain it, we could dive deeper. i have to say i was blown away and very surprised by "NT" - given its abysmal reliability i was expecting MS-DOS/Win 3.x level hackery everywhere. instead i got a good idea of Dave Cutler and VMS - it was positively, uniformly solid, pedestrian, and explicit. to a highly disgusting degree: 20-30 lines of code to call a function to create something that would be 1-2 lines of code in a UNIX (sure we cheat and overload the return with error codes and status and successful object id being returned - i mean they shouldn't overlap, right? probably? yolo!).

in NT you create a structure containing the options, maybe call a helper function to default that option structure, call the actual function, and if it fails because of limits, it reports how much you need, then you go back and re-allocate what you need and call it again. if you need the new API, you call someReallyLongFunctionEx, making sure to remember to set the version flag in the options struct to the correct size of the new updated option version. nobody is sure what happens if getSomeMinorObjectEx() takes a getSomeMinorObjectParamEx option structure that is the same size as the original getSomeMinorObjectParam struct, but it would probably involve calling setSomeMinorObjectParamExParamVersion() or getObjectParamStructVersionManager()->SelectVersionEx(versionSelectParameterEx). every one is slightly different, but they are all the same vibe.

if NT was actual architecture, it would definitely be "brutalist" [1]

the core of NT is the antithesis of the New Jersey (Berkeley/BSD) [2] style.

the problem is that all companies, both micro$oft and commercial companies trying to use it, have finite resources. the high-architect brutalist style works for VMS and NT, but only at extreme cost. the fact that it's tricky to get signals right doesn't slow most UNIX developers down, most of the time, except for when it does. and when it does, a buggy, but 80%, solution is but a wrong stackoverflow answer away. the fact that creating a single object takes a page of code and doing anything real takes an architecture committee and a half-dozen objects that each take a page of (very boring) code, does slow everyone down, all the time.

it's clear to me, just reading the code, that the MBAs running micro$oft eventually figured that out and decided, outside the really core kernel, not to adopt either the MIT/Stanford or the New Jersey/Berkeley style - instead they would go with "offshore low bidder" style for the rest of whatever else was bolted on since 1995. dave cutler probably now spends the rest of his life really irritated whenever his laptop keeps crashing because of this crap. it's not even good crap code. it's absolutely terrible; the contrast is striking.

then another lesson (pay attention systemd people), is that buggy, over-complicated, user mode stuff and ancillary services like control-panel, gui, update system, etc. can sink even the best most reliable kernel.

then you get to sockets, and realize that the internet was a "BIG DEAL" in the 1990s.

ooof, microsoft. winsock.

then you have the other, other, really giant failure. openness. open to share the actual code with the users is #1. #2 is letting them show the way and contribute. the micro$oft way was violent hatred to both ideas. oh, well. you could still be a commercial company that owns the copyright and not hide the, good or bad, code from your developers. MBAAs (MBA Assholes) strike again.

[1] https://en.wikipedia.org/wiki/Brutalist_architecture [2] https://en.wikipedia.org/wiki/Worse_is_better

chasil

9 days ago

I would imagine that Windows was closer to UNIX than VMS was, and there were several POSIX ports to VMS.

VMS POSIX ports:

https://en.m.wikipedia.org/wiki/OpenVMS#POSIX_compatibility

VMS influence on Windows:

https://en.m.wikipedia.org/wiki/Windows_NT#Development

"Microsoft hired a group of developers from Digital Equipment Corporation led by Dave Cutler to build Windows NT, and many elements of the design reflect earlier DEC experience with Cutler's VMS, VAXELN and RSX-11, but also an unreleased object-based operating system developed by Cutler at Digital codenamed MICA."

Windows POSIX layer:

https://en.m.wikipedia.org/wiki/Microsoft_POSIX_subsystem

Xenix:

https://en.m.wikipedia.org/wiki/Xenix

"Tandy more than doubled the Xenix installed base when it made TRS-Xenix the default operating system for its TRS-80 Model 16 68000-based computer in early 1983, and was the largest Unix vendor in 1984."

EDIT: AT&T first had an SMP-capable UNIX in 1977.

"Any configuration supplied by Sperry, including multiprocessor ones, can run the UNIX system."

https://www.bell-labs.com/usr/dmr/www/otherports/newp.pdf

UNIX did not originally use an MMU:

"Back around 1970-71, Unix on the PDP-11/20 ran on hardware that not only did not support virtual memory, but didn't support any kind of hardware memory mapping or protection, for example against writing over the kernel. This was a pain, because we were using the machine for multiple users. When anyone was working on a program, it was considered a courtesy to yell "A.OUT?" before trying it, to warn others to save whatever they were editing."

https://www.bell-labs.com/usr/dmr/www/odd.html

Shared memory was "bolted on" with Columbus UNIX:

https://en.m.wikipedia.org/wiki/CB_UNIX

...POSIX implements setfacl.

runjake

9 days ago

The NT kernel is pretty nifty, albeit an aging design.

My issue with Windows as an OS is that there's so much cruft, often adopted from Microsoft's older OSes, stacked on top of the NT kernel, effectively circumventing its design.

You frequently see examples of this in vulnerability write-ups: "NT has mechanisms in place to secure $thing, but unfortunately, this upper level component effectively bypasses those protections".

I know Microsoft would like to, if they considered it "possible", but they really need to move away from the Win32 and MS-DOS paradigms and rethink a more native OS design based solely on NT and evolving principles.

robotnikman

9 days ago

The backwards compatibility though is one of the major features of Windows as an OS. The fact that a company can still load some software made 20 years ago, developed by a company that is no longer in business, is pretty cool (and I've worked at such places using ancient software on some Windows box; sometimes there's no time or money for alternatives).

flohofwoe

8 days ago

If you look at more recent Windows APIs, I'm really thankful that the traditional Win32 APIs still work. On average the older APIs are much nicer to work with.

alt227

8 days ago

> On average the older APIs are much nicer to work with

IMO this is because they are better written, by people who had deeper understanding of the entire OS picture and cared more about writing performant and maintainable code.

xeonmc

8 days ago

Well-illustrated in the article “How Microsoft Lost the API War”[0]:

    The Raymond Chen Camp believes in making things easy for developers by making it easy to write once and run anywhere (well, on any Windows box). 
    The MSDN Magazine Camp believes in making things easy for developers by giving them really powerful chunks of code which they can leverage, if they are willing to pay the price of incredibly complicated deployment and installation headaches, not to mention the huge learning curve. 
    The Raymond Chen camp is all about consolidation. Please, don’t make things any worse, let’s just keep making what we already have still work. 
    The MSDN Magazine Camp needs to keep churning out new gigantic pieces of technology that nobody can keep up with.  
[0] https://www.joelonsoftware.com/2004/06/13/how-microsoft-lost...

jimbokun

8 days ago

> making things easy for developers by giving them really powerful chunks of code which they can leverage, if they are willing to pay the price of incredibly complicated deployment and installation headaches, not to mention the huge learning curve.

I feel the same way about Spring development for Java.

Also reminds me of:

https://www.infoq.com/presentations/Simple-Made-Easy/

bboygravity

8 days ago

Nicer to work with?

I can't think of any worse API in the entire world?

sumtechguy

8 days ago

The win32 API calls are decent relative to themselves, if you understand their madness. For every positive call there is usually an anti call and a worker call. Open a handle, use the handle with its helper calls, close the handle. You must understand their 3-step pattern for it all to work. There are a few exceptions, but usually those are the sort of things where the system has given you a handle and you must deal with it via the helper calls. In those cases the system usually handles the open/close part.

Now you get into the COM/.NET/UWP stuff and the API gets a bit more fuzzy on that pattern. The win32 API is fairly consistent in its madness. So are the other API stacks they have come up with. But only in their own mad world.

Also, as far as documentation goes, the older win32 docs are actually usually decently written and self-consistent. The newer stuff, not so much.

If you have the displeasure of mixing APIs you are in for a rough ride as all of their calling semantics are different.

user

8 days ago

[deleted]

flohofwoe

8 days ago

There are some higher level COM APIs which are not exactly great, but the core Win32 DLL APIs (kernel32, user32, gdi32) are quite good, also the DirectX APIs after ca 2002 (e.g. since D3D9) - because even though the DirectX APIs are built on top of COM, they are designed in a somewhat sane way (similar to how there are 'sane' and 'messy' C++ APIs).

Especially UWP and its successors (I think it's called WinRT now?) are objectively terrible.

OvbiousError

8 days ago

I've had to work with api functions like https://learn.microsoft.com/en-us/windows/win32/api/winuser/... and friends. It was by far the most unpleasant api I've ever worked with.

frabert

8 days ago

I think that particular pattern is a perfectly reasonable way to let the user ingest an arbitrarily long list of objects without having to do any preallocations -- or indeed, any allocations at all.
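
The link above is truncated, but judging from the replies the pattern in question is the callback-style enumeration family (EnumWindows/EnumChildWindows being the canonical examples); a minimal sketch:

    #include <windows.h>
    #include <stdio.h>

    // Sketch of the callback-style enumeration pattern under discussion:
    // the API calls you back once per item, so the caller never has to
    // allocate or size a list up front.
    static BOOL CALLBACK on_window(HWND hwnd, LPARAM lparam) {
        (void)hwnd;
        int *count = (int *)lparam;
        ++*count;
        return TRUE;                 // keep enumerating; FALSE would stop early
    }

    int main(void) {
        int count = 0;
        EnumWindows(on_window, (LPARAM)&count);
        printf("top-level windows: %d\n", count);
        return 0;
    }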

wruza

8 days ago

Because allocating well under a hundred handles is the biggest problem we have.

WinAPI is awful to work with for reasons above anyone’s comprehension. It’s just legacy riding on legacy, with initial legacy made by someone 50% following stupid patterns from the previous 8/16 bit decade and 50% high on mushrooms. The first thing you do with WinAPI is abstracting it tf away from your face.

mananaysiempre

8 days ago

Yes, but the inversion of control is unpleasant to deal with—compare Find{First,Next}File which don’t require that.

wongarsu

8 days ago

Which is a pattern that also exists in the Win32 API, for example in the handle = CreateToolhelp32Snapshot(), Thread32First(handle, out), while Thread32Next(handle, out) API for iterating over a process's threads.

I also find EnumChildWindows pretty wacky. It's not too bad to use, but it's a weird pattern and a pattern that Windows has also moved away from since XP.

https://learn.microsoft.com/en-us/windows/win32/toolhelp/tra...
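
For reference, a minimal sketch of that snapshot/First/Next pattern; note that TH32CS_SNAPTHREAD actually captures every thread in the system, and you filter by owner PID to get a single process's threads:

    #include <windows.h>
    #include <tlhelp32.h>
    #include <stdio.h>

    // Sketch of the handle + First/Next iteration pattern from the Toolhelp API.
    int main(void) {
        HANDLE snap = CreateToolhelp32Snapshot(TH32CS_SNAPTHREAD, 0);
        if (snap == INVALID_HANDLE_VALUE) return 1;

        THREADENTRY32 te;
        te.dwSize = sizeof(te);                 // must be set before Thread32First
        if (Thread32First(snap, &te)) {
            do {
                printf("pid %lu tid %lu\n", te.th32OwnerProcessID, te.th32ThreadID);
            } while (Thread32Next(snap, &te));
        }
        CloseHandle(snap);
        return 0;
    }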

sirwhinesalot

8 days ago

The various WinRT APIs are even worse. At least Win32 is "battle tested"

pjmlp

8 days ago

X Windows and Motif, for example.

marcosdumay

8 days ago

So, you don't use the newer Windows APIs?

wruza

8 days ago

Yeah, I like the smell of cbSize in the RegisterClassExA. Smells like… WNDCLASSEXA.lpfnWndProc.

Nothing can beat WinAPI in nicety to work with, just look at this monstrosity:

  gtk_window_new(GTK_WINDOW_TOPLEVEL);
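
To make the sarcasm concrete, here is roughly what the Win32 counterpart of that single GTK call looks like; a minimal sketch with no error handling, and the class/window names are made up:

    #include <windows.h>

    static LRESULT CALLBACK WndProc(HWND h, UINT m, WPARAM w, LPARAM l) {
        if (m == WM_DESTROY) { PostQuitMessage(0); return 0; }
        return DefWindowProcW(h, m, w, l);
    }

    int WINAPI wWinMain(HINSTANCE hInst, HINSTANCE prev, PWSTR cmd, int show) {
        WNDCLASSEXW wc = {0};
        wc.cbSize        = sizeof(wc);          // the struct tells the API how big it is
        wc.lpfnWndProc   = WndProc;
        wc.hInstance     = hInst;
        wc.lpszClassName = L"DemoWindow";
        RegisterClassExW(&wc);

        HWND hwnd = CreateWindowExW(0, L"DemoWindow", L"Demo", WS_OVERLAPPEDWINDOW,
                                    CW_USEDEFAULT, CW_USEDEFAULT, 640, 480,
                                    NULL, NULL, hInst, NULL);
        ShowWindow(hwnd, show);

        MSG msg;
        while (GetMessageW(&msg, NULL, 0, 0) > 0) {
            TranslateMessage(&msg);
            DispatchMessageW(&msg);
        }
        return 0;
    }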

sirwhinesalot

7 days ago

On the other hand GTK has been rewritten 3 times and each new version deprecates a bunch of stuff, making it an absolute nightmare for apps to migrate.

wruza

7 days ago

Why can’t they stick an app to a specific gtk version? They were fine with what they started it in, what is the reason to migrate?

If the answer is ver++ anxiety, the problem is self-imposed (still better than using winapi).

sirwhinesalot

7 days ago

Needing new features like wayland support? Better DE integration? Distros removing old versions? What an odd question.

wruza

7 days ago

What exactly does “wayland support” do to an existing X11 app? How did they manage to ship either the app or Wayland without mutual “support” before?

What’s DE integration apart from tray and notifications? Why does an app need any DE integration beyond a tray icon?

These questions are valid, not odd.

Distros removing versions is a distro’s problem. Most gtk versions are installable on popular distros, afaiu.

Anyway, I find most of these points are moot, because they mirror winapi. Gdi -> directx, fonts scaling, need for msvcrts and so on. Looks like an argument for the sake of argument. You can’t make a modern app with winapi either, it will be a blurry non-integrated win2k window like device manager or advanced properties. The difference is you can’t migrate them at all, even MS can not.

sirwhinesalot

6 days ago

All untrue from my perspective so I'm not sure which parallel universes we live in.

sedatk

9 days ago

That, and it's 30+ years (NT was released in 1993). Backwards compatibility is certainly one of the greatest pieces of business value Microsoft provides to its customers.

MarkSweep

9 days ago

If you include the ability of 32-bit versions of Windows to run 16-bit Windows and DOS applications with NTVDM, it is more like 40+ years.

https://en.wikipedia.org/wiki/Virtual_DOS_machine

(Math on the 40 years: Windows 1.0 was released in 1985, and the last consumer version of Windows 10 (which is the last Windows NT version to support a 32-bit install and thus NTVDM) goes out of support in 2025. DOS was first released in 1981, more than 40 years ago. I don’t know when it was released, but I’ve used a pretty old 16-bit DOS app on Windows 10: a C compiler for the Intel 80186.)

winter_blue

8 days ago

> I’ve used a pretty old 16-bit DOS app on Windows 10: a C compiler for the Intel 80186

It’s amazing that stuff still runs on Windows 10. I’m guessing Windows 10 has a VM layer both for 32-bit and 16-bit Windows + DOS apps?

JonathonW

8 days ago

Windows 10 only does 16-bit DOS and Windows apps on the 32-bit version of Windows 10, so it only has a VM layer for those 16-bit apps. (On x86, NTVDM uses the processor's virtual 8086 mode to do its thing; that doesn't exist in 64-bit mode on x86-64 and MS didn't build an emulator for x86-64 like they did for some other architectures back in the NT on Alpha/PowerPC era, so no DOS or 16-bit Windows apps on 64-bit Windows at all.)

sedatk

9 days ago

True. I just assumed that 16-bit support got dropped since Windows 11 was 64-bit only.

EvanAnderson

8 days ago

Microsoft decided not to type "make" for NTVDM on 64-bit versions of Windows (I would argue arbitrarily). It has been unofficially built for 64-bit versions of Windows as a proof-of-concept: https://github.com/leecher1337/ntvdmx64

badgersnake

8 days ago

That’s okay, and if people want to test their specific use case on that and use it then great.

It’s a pretty different amount of effort to Microsoft having to do a full 16 bit regression suite and make everything work and then support it for the fewer and fewer customers using it. And you can run a 32 bit windows in a VM pretty easily if you really want to.

Timwi

8 days ago

Or you can run 16-bit Windows 3.1 in DOSBox.

badgersnake

8 days ago

Sure, but again that’s on you to test and support.

skissane

8 days ago

> Microsoft decided not to type "make" for NTVDM on 64-bit versions of Windows (I would argue arbitrarily).

I recently discovered that Windows 6.2 (more commonly known as Windows 8) added an export to Kernel32.dll called NtVdm64CreateProcessInternalW.

https://www.geoffchappell.com/studies/windows/win32/kernel32...

Not sure exactly what it does (other than obviously being some variation on process creation), but the existence of a function whose name starts with NtVdm64 suggests to me that maybe Microsoft actually did have some plan to offer a 64-bit NTVDM, but only abandoned it after they’d already implemented this function.

dartharva

8 days ago

But only to a degree, right? Only the last two decades of software is what the OS ideally needs to support, beyond that you can just use emulators.

badsectoracula

8 days ago

Software is written against APIs, not years, so the problem with this sort of thinking is that software written -say- 10 years ago might still be using APIs from more than 20 years ago, so if you decide to break/remove/whatever the more-than-20-year-ago APIs you not only break the more-than-20-year-ago software but also the 10 year old software that used those APIs - as well as any other software, older or newer, that did the same.

(also i'm using "API" for convenience here, replace it with anything that can affect backwards compatibility)

EDIT: simple example in practice: WinExec was deprecated when Windows switched from 16bit to 32bit several decades ago, yet programs are still using it to this day.
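
A minimal sketch of that longevity: WinExec dates back to 16-bit Windows and has long been listed as superseded by CreateProcess, yet the call below still compiles and runs on current Windows.

    #include <windows.h>

    int main(void) {
        // WinExec is a 16-bit-era API kept alive for backwards compatibility;
        // return values <= 31 are its legacy error codes.
        UINT rc = WinExec("notepad.exe", SW_SHOWNORMAL);
        return rc > 31 ? 0 : 1;
    }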

da_chicken

8 days ago

Pretty much the only 16-bit software that people commonly encounter is an old setup program.

For a very long time those were all 16-bit because they didn't need the address space and they were typically smaller when compiled. This means that a lot of 32-bit software from the late 90s that would otherwise work fine is locked inside a 16-bit InstallShield box.

aleph_minus_one

8 days ago

> Pretty much the only 16-bit software that people commonly encounter is an old setup program.

I know quite a lot of people who are still quite fond of some old 16-bit Windows games which - for this "bitness reason" - don't work on modern 64 bit versions of Windows anymore. People who grew up with these Windows versions are quite nostalgic about applications/games from "their" time, and still use/play them (similar to how C64/Amiga/Atari fans are about "their" system).

NegativeLatency

8 days ago

Maybe, but your app could also be an interface to some super expensive scientific/industrial equipment that does weird IO or something.

ozim

9 days ago

People tend to forget that it already is 2024.

xattt

9 days ago

Short of driver troubles at the jump from Win 9x to 2k/XP, and the shedding of Win16 compatibility layers at the time Win XP x64 was released, backwards compatibility has always been baked into Windows. I don’t know if there was any loss of compatibility during the MS-DOS days either.

It’s just expected at this point.

anthk

8 days ago

For DOS, if you borrow ReactOS' NTVDM under XP/2003 (and maybe 32-bit Vista/7; I don't know about 64-bit binaries), you can run DOS games in a much better way than with Windows' own counterpart.

ExoticPearTree

8 days ago

Not long ago, a link was posted here to a job advert from the German railway looking for a Win 3.11 specialist.

As I see it, the problem is the laziness/cheapness of companies when it comes to upgrades, and vendors' reluctance to get rid of dead stuff for fear of losing business.

APIs could be deprecated/updated at set intervals, say current minus two or three versions back, and be done with it.

wongarsu

8 days ago

Lots of hardware is used for multiple decades, but has software that is built once and doesn't get continuous updates.

That isn't necessarily laziness, it's a mindset thing. Traditional hardware companies are used to a mindset where they design something once, make and sell it for a decade, and the customer will replace it after 20 years of use. They have customer support for those 30 years, but software is treated as part of that design process.

That makes a modern OS that can support the APIs of 30-year-old software (so 40-year-old APIs) valuable to businesses. If you only want to support 3 versions that's valid, but you will lose those customers to a competitor who has better backwards compatibility.

runjake

9 days ago

> The backwards compatibility though is one of the major features of windows as an OS.

It is. That's even been stated by MSFT leadership time and time again.

But at what point does that become a liability?

I'm arguing that point was about 15-20 years ago.

efitz

8 days ago

There is another very active article on HN today about the launch of the new Apple iPhone 16 models.

The top discussion thread on that post is about “my old iPhone $version is good enough, why would I upgrade”.

It’s funny, if you ask tech people, a lot fall into the “I have to have the latest and greatest” but also a lot fall into the “I’ll upgrade when they pry the rusting hardware from my cold dead hands”.

For Microsoft, the driver for backwards compatibility is economic: Microsoft wants people to buy new Windows, but in order to do that, they have to (1) convince customers that all their existing stuff is going to continue to work, and (2) convince developers that they don’t have to rewrite (or even recompile) all their stuff whenever there’s a new version of Windows.

Objectively, it seems like Microsoft made the right decision, based on revenue over the decades.

Full disclosure: I worked for Microsoft for 17 years, mostly in and around Windows, but left over a decade ago.

aleph_minus_one

8 days ago

> It’s funny, if you ask tech people, a lot fall into the “I have to have the latest and greatest” but also a lot fall into the “I’ll upgrade when they pry the rusting hardware from my cold dead hands”.

Not concerning the iPhone, but in general tech people tend to be very vocal about not updating when they feel that the new product introduces some new spying features over the old one, or when they feel that the new product worsens what they love about the existing product (there, their taste is often very different from the "typical customer").

ruthmarx

8 days ago

> It’s funny, if you ask tech people, a lot fall into the “I have to have the latest and greatest”

This is almost never a technical decision, but a 'showing off' decision IMO.

HPsquared

8 days ago

It's a "fun toys" decision.

ruthmarx

8 days ago

Often one and the same.

wvenable

8 days ago

It is not a liability because most of what you are talking about is just compatibility not backwards compatibility. What makes an operating system Windows? Fundamentally it is something that runs Windows apps. Windows apps existed 15-20 years ago as much as they exist today. If you make an OS that doesn't run Windows apps then it just isn't Windows anymore.

The little weird things that exist due to backwards compatibility really don't matter. They're not harming anything.

dgfitz

9 days ago

New frameworks have vulnerabilities. Old OS flavors have vulnerabilities. OpenSSh keeps making the news for vulnerabilities.

I’d argue that software is never finished, only abandoned, and I absolutely did not generate that quote.

Stop. Just stop.

anthk

8 days ago

>OpenSSH

Yes, just stop... with the bullshit. OpenBSD didn't create the vulnerabilities. Downstream Linux distros (OpenSSH comes from OpenBSD, which also releases a portable tarball) adding non-core features and libraries did.

johannes1234321

9 days ago

It is a great achievement. But the question is: Is it really relevant? Couldn't they move the compatibility for larger parts to a VM or other independent Subsystem?

Of course even that isn't trivial, as one wants to share filesystem access (while I can imagine some overlay limiting access), might need COM and access to devices ... but I would assume they could push that a lot more actively. If they decided which GUI framework to focus on.

wvenable

9 days ago

> Couldn't they move the compatibility for larger parts to a VM or other independent Subsystem?

A huge amount of the compatibility stuff is already moved out into separate code that isn't loaded unless needed.

The problem too, though, is users don't want independent subsystems -- they want their OS to operate as a singular environment. Raymond Chen has mentioned this a few times on his blog when this sort of thing comes up.

Backwards compatibility also really isn't the issue that people seem to think it is.

dredmorbius

8 days ago

Independent subsystems need not be independent subsystems that the user must manage manually.

The k8s / containers world on Linux ... approaches ... this. Right now that's still somewhat manual, but the idea that a given application might fire off with the environment it needs without layering the rest of the system with those compatibility requirements, and also, incidentally, sandboxing those apps from the rest of the system (specific interactions excepted) would permit both forward advance and backwards compatibility.

A friend working at a virtualisation start-up back in the aughts told of one of the founders who'd worked for the guy who'd created BCPL, the programming language which preceded B, and later C. Turns out that when automotive engineers were starting to look into automated automobile controls, in the 1970s, C was considered too heavy-weight, and the systems were implemented in BCPL. Some forty years later, the systems were still running, in BCPL, over multiple levels of emulation (at least two, possibly more, as I heard it). And, of course, faster than in the original bare-metal implementations.

Emulation/virtualisation is actually a pretty good compatibility solution.

wvenable

8 days ago

Users don't want sandboxing! It's frustrating enough on iOS and Android. They want to be able to cut and paste, have all their files in one place, open files in multiple applications at the same time, have plugins, etc.

Having compatibility requirements is almost the definition of an operating system.

If you bundle every application with basically the entire OS needed to run them then what exactly have you created?

dredmorbius

8 days ago

There are a relatively limited set of high-value target platforms: MS DOS (still in some use), Win95, WinNT and successor versions. Perhaps additionally a few Linux or BSD variants.

Note that it's possible to share some of that infrastructure by various mechanisms (e.g., union mounts, presumably read-only), so that even where you want apps sandboxed from one another, they can share OS-level resources (kernel, drivers, libraries).

At a user level, sandboxing presumes some shared file space, as in "My Files", or shared download, or other spaces.

Drag-and-drop through the GUI itself would tend to be independent of file-based access, I'd hope.

wvenable

8 days ago

What is gained by this? What would you get by virtualizing a WinAPI environment for app in Windows? (MS DOS compatibility is already gone from Windows). You get a whole bunch of indirection and solve a problem that doesn't exist.

dredmorbius

8 days ago

Obvious obvious advantage is obviously obvious: the ability to run software which is either obsolete, incompatible with your present system, or simply not trusted.

In my own case, I'd find benefits to spinning up, say, qemu running FreeDOS, WinNT, or various Unixen. Total overhead is low, and I get access to ancient software or data formats. Most offer shared data access through drive mapping, networking, Samba shares, etc.

That's not what I'd suggested above as an integrated solution, but could easily become part of the foundation for something along those lines. Again, Kubernetes or other jail-based solutions would work where you need a different flavour that's compatible with your host OS kernel. Where different kernels or host architectures are needed, you'll want more comprehensive virtualisation.

wvenable

8 days ago

As long as you ensure compatibility then software doesn't have to be obsolete or incompatible. The Windows API is so stable that it's the most stable API available for Linux.

I can already run VMs and that seems like a more complete solution. To have an integrated solution you would need cooperation that you can't get from obsolete systems. I can run Windows XP in a VM. But if I want to run a virtualized Windows XP application seamlessly integrated into my desktop then I'm going to need a Windows XP that is built to do that.

dredmorbius

8 days ago

Compatibility comes with costs:

- Fundamental prerequisites cannot be changed or abandoned, even where they impose limitations on the overall platform.

- System complexity increases, as multiple fixed points must be maintained, regressions checked, and where those points introduce security issues, inevitable weaknesses entailed.

- Running software which presumed non-networked hosts, or a far friendlier network, tends to play poorly in today's world. Well over a decade ago, a co-worker who'd spun up a Windows VM to run Windows Explorer for some corporate intranet site or another noted that the VM was corrupted within the five minutes or so it was live within the corporate LAN. At least it was a VM (and from a static disk image). Jails and VMs isolate such components and tune exposure amongst them.

What you and I can, will, and do actually do, which is to spin up VMs as we need them for specific tasks, is viable for a minuscule set of people, most of whom lack fundamental literacy let alone advanced technical computer competency.

The reason for making such capabilities automated within the host OS is so that those people can have access to the software, systems, and/or data they need, without needing to think about, or even be aware of how or that it's being implemented.

I've commented and posted about the competency of the average person as regards computers and literacy. It's much lower than you're likely to have realised:

The tyranny of the minimum viable user: <https://web.archive.org/web/20240000000000*/https://old.redd...>

Adult literacy in the United States: <https://nces.ed.gov/pubs2019/2019179/index.asp> <https://news.ycombinator.com/item?id=29734146>

And no, I'm not deriding those who don't know. I've come to accept them as part of the technological landscape. A part I really wish weren't so inept, but wishing won't change it. At the same time, the MVU imposes costs on the small, though highly capable, set of much more adept technologists.

nitwit005

8 days ago

Generally speaking, the waste is only hard disk space. If no one ever loads some old DLL, it just sits there.

johannes1234321

8 days ago

Nobody loads it, but the attacker. Either via a specially crafted program or via some COM service invoked from a Word document or something.

jimbokun

8 days ago

By moving the Win32 API onto Windows NT kernel, isn't that essentially what Microsoft did?

486sx33

9 days ago

I think that VM software like Parallels has shown us that we are just now at the point where VMs can handle it all and feel native. Certainly NT could use a rewrite to eliminate all the legacy stuff… but instead they focus on Copilot and nagging me not to leave windows edge internet explorer.

ksec

8 days ago

My question is: why can't M$ ship the old OS running as a VM, and free themselves from backward compatibility in the newer OS?

galaxyLogic

8 days ago

Users will want to use applications that require features of the earlier OS version, and newer ones that require newer features. They don't want to have to switch to using a VM because old apps would only run on that VM.

wongogue

8 days ago

Putting apps from the VM on the primary desktop is something they have already done on WSLg. Launching Linux and X server is all taken care of when you click the app shortcut. Similar to the parent’s ask, WSL2/WSLg is a lightweight VM running Linux.

simonh

8 days ago

In many ways the old API layers are sandboxed much like a VM. The main problems are things like device drivers, software that wants direct access to external interfaces, and software that accesses undocumented APIs or implementation details of Windows. MS goes to huge lengths to keep trash like that still working with tricks like application specific shims.

mike_hearn

8 days ago

Backwards compatibility isn't their biggest problem to begin with, so that wouldn't be worth it. In effect they already did break it: the new Windows APIs (WinRT/UWP) are very different to Win32 but now people target cross platform runtimes like the browser, JVM, Flutter, etc. So it doesn't really matter that they broke backwards compatibility. The new tech isn't competitive.

user

8 days ago

[deleted]

tippytippytango

8 days ago

If 20 years is so ancient, why did they go by so fast....

lproven

8 days ago

Bad news. NT wasn't 20 years ago. It was 31 years ago.

hnfong

8 days ago

It's possible that wine is more "backwards compatible" than the latest version of Windows though.

And while wine doesn't run everything, at least it doesn't circumvent security measures put in place by the OS...

leftyspook

8 days ago

I've had more luck running games from 97-00 under wine than on modern Windows.

flohofwoe

8 days ago

My main impression of Windows is that all the 'old' NT kernel stuff is very solid and still holds up well; that there's a 'middle layer' (Direct3D, DXGI, window system compositing) where there's still solid progress in new Windows versions (although most of those 'good parts' are probably ported over from the Xbox); while most of the top-level user-facing code (UI frameworks, Explorer, etc.) is deteriorating at a dramatic pace, which unfortunately makes all the actual progress that still happens under the hood feel kind of pointless.

apatheticonion

9 days ago

My understanding is that the portion of revenue Microsoft makes from Windows these days is nearly negligible (under 10%). Both XBox and Office individually make more money for Microsoft than Windows, which indicates that they don't have a compelling incentive to improve it technically. This would explain their infatuation with value extraction initiatives like ads in Explorer and Recall.

My understanding is that the main thing keeping Windows relevant is the support for legacy software, so they'd be hesitant to jeopardize that with any bold changes to the kernel or system APIs.

That said, given my imagined cost of maintaining a kernel plus my small, idealistic, naive world view, I'd love it if Microsoft simply abandoned NT and threw their weight behind the Linux kernel (or, if GNU is too restrictive, BSD, or alternatively wrote their own POSIX-compliant kernel like macOS).

Linux would be ideal given its features: containers, support for Android apps without emulation, an abundance of supported devices, helpful system capabilities like UNIX sockets (I know they started to make progress there but abandoned further development), and support for things like ROCm (which only works on Linux right now).

Microsoft could build Windows on top of that POSIX kernel and provide a compatibility layer for NT calls and Win32 APIs. I don't even care if it's open source.

The biggest value for me is development capabilities (my day job is writing a performance sensitive application that needs to run cross-platform and Windows is a constant thorn in my side).

Cygwin, msys2/git-bash are all fantastic but they are no replacement for the kind of development experience you get on Linux & MacOS.

WSL1 was a great start and gave me hope, but is now abandonware.

WSL2 is a joke, if I wanted to run Linux in a VM, I'd run Linux in a VM.

I guess a less extreme option would be for Microsoft to extend NT to be POSIX compliant - if I could target Unix syscalls during development and produce binaries that worked on Windows, I suppose I'd be happy with that.

delta_p_delta_x

9 days ago

> I'd love it if Microsoft simply abandoned NT and threw their weight behind the Linux kernel

I don't understand why people keep repeating this wish, rather than the arguably better, more competitive option: open-source the NT and Windows codebase, prepare an 'OpenWindows' (nice pun there, really) release, and simultaneously support enterprise customers with paid support licences, as companies like Red Hat currently do.

> Cygwin, msys2/git-bash are all fantastic but they are no replacement for the kind of development experience you get on Linux & MacOS.

I couldn't disagree more. As someone who comes from a mostly-Windows pedigree, UNIX is... pretty backwards, and I look upon any attempt to shoehorn UNIX-on-Windows with a fair bit of disapproval, even if I concede that their individual developers and maintainers have done a decent job. Visual Studio (not Code) is a massively superior development and debugging tool to anything that the Unix crowd have cooked up (gdb? perf? Tell me when you can get flame graphs in one click).

apatheticonion

9 days ago

That's an interesting idea. Some thoughts come to mind:

- The relatively low revenue of Windows for Microsoft means that they have the potential opportunity of increasing Windows profitability by dropping the engineering costs associated with NT (though on the flipside, they'd acquire the engineering cost of developing Linux).

- Open sourcing NT would likely see a majority of it ported into Linux compatibility layers which would enable competitors (not that this is bad for us as consumers, it's just not good for business)

- Adopting the Linux kernel and writing a closed source NT compatibility layer, init system, and closed source desktop environment means that the "desktop" and Microsoft aspects of the OS could be retained as private IP - which is the part that they could charge for. I know I'd certainly pay for a Linux distribution that has a well made DE.

> UNIX is... pretty backwards,

I honestly agree. Many of the APIs show their age and, in the age of high level languages, it's frustrating to read C docs to understand function signatures/semantics. It's certainly not ergonomic - though that's not to say there isn't room to innovate here.

Ultimately, I value sameness. Aside from ergonomics, NT doesn't offer _more_ than POSIX and language bindings take care of the ergonomics issues with unix, so in many ways I'd argue that NT offers less.

> Visual Studio (not Code) is a massively superior development and debugging tool [...] Tell me when you can get flame graphs in one click

Just because the tooling isn't as nice to use now doesn't mean that Microsoft couldn't make it better (and charge for that) if they adopted Linux. This isn't something entirely contingent on the kernel.

delta_p_delta_x

9 days ago

I don't see why everything has to be Linux (which I will continue to maintain has neither the better kernel- nor user-mode).

Windows and NT have their own strengths as detailed in the very article that this thread links to. When open-sourced they could develop entirely independently, and it is good to have reasonable competition. Porting NT and the Windows shell to the Linux kernel for porting's sake could easily take years, which is wasted time and effort on satisfying someone's not-invented-here syndrome. It will mean throwing away 30+ years of hardware and software backward compatibility just to satisfy an imperfect and impractical ideal.

For perspective: something like WINE still can't run many Office programs. The vast majority of its development in recent years has been focused on getting video games to work by porting Direct3D to Vulkan (which is comparatively straightforward because most GPUs have only a single device API that both graphics APIs expose, and also given the fact that both D3D and Vulkan shader code compile to SPIR-V). Office programs are the bread and butter of Windows users. The OpenOffice equivalents are barely shadows of MS Office. To be sure, they're admirable efforts, but that only gets the developers pats on the back.

EvanAnderson

9 days ago

I have a fever dream vision of a "distribution" of an open source NT running in text mode with a resurrected Interix. Service Control Manager instead of systemd, NTFS (with ACLs and compression and encryption!), the registry, compatibility with scads of hardware drivers. It would be so much fun!

ruthmarx

8 days ago

Isn't ReactOS close enough?

EvanAnderson

8 days ago

I've kept meaning to look at ReactOS and put it off again and again. I felt Windows Server 2003 was "peak Windows" before Windows 7 so I'd imagine I'd probably like ReactOS.

cherryteastain

9 days ago

> open-source the NT and Windows codebase

May be very difficult or impossible if the Windows codebase has third-party IP (e.g. for hardware compatibility) with restrictive licensing

jen20

9 days ago

Sun managed it with Solaris (before Oracle undid that work) - indeed they had to create a license which didn't cause problems with the third party components (the CDDL).

p_l

8 days ago

The license happened less because of third-party components (GPLv2 would have worked for that too, even if it's a less well-understood area) and more because GPLv3 was late, Sun wanted a patent clause in the license, and AFAIK engineers rebelled against licensing that would have prevented the BSDs (or others) from using the code.

(For those who still believe "CDDL was designed to be incompatible with GPL", the same issues show up when trying to mix GPLv2 and GPLv3 code if you can't relicense the former to v3)

dajtxx

8 days ago

I can imagine the effort of open source Windows would be prohibitive.

Having to go through every source file to ensure there is nothing to cause offense in there; there may be licensed things they'd have to remove; optionally make it buildable outside of their own environment...

Or there may be just plain embarrassing code in there they don't feel the need to let outsiders see, and they don't want to spend the time to check. But you can be sure a very small group of nerds will be waiting to go through it and shout about some crappy thing they found.

dijit

8 days ago

I'd venture that even more nerds would go through it and fix their specific problems.

It's always been quite clear that FOSS projects with sufficient traction are the pinnacle of getting something polished. No matter how architecturally flawed or how bad the design is: many eyes seem to make light work of all edge cases over time.

On the other hand, FOSS projects tend to lack the might of a large business to hit a particular business case or criticality, at least in the short term.

Open sourcing is probably impossible for the same reasons open sourcing Solaris was really difficult. The issues that were affecting solaris affect Windows at least two orders of magnitude harder.

It's the smart play, though they'd lose huge revenues from Servers that are locked in... but otherwise, Windows is a dying operating system, it's not the captive audience it once was as many people are moving to web-apps, games are slowly leaving the platform and it's hanging on mostly due to inertia. The user hostile moves are not helping to slow the decline either.

jen20

9 days ago

> Visual Studio (not Code) is a massively superior development and debugging tool to anything that the Unix crowd have cooked up (gdb? perf? Tell me when you can get flame graphs in one click).

Hard disagree on the development aspect of VS, which (last time I used it, in 2015) couldn't even keep up with my fairly slow typing speed.

The debugging tools are excellent, but they are certainly not any more excellent than those in Instruments on macOS (which is largely backed by DTrace).

pathartl

9 days ago

VS2022 is actually pretty damn slick. I use it on the daily and it's much more stable than any previous version. It's still not as fast as a text editor (I _do_ miss Sublime's efficiency), but even going back to 2019 is extremely hard.

Timwi

8 days ago

2015 is 9 years ago. We shouldn't keep comparing Windows/Microsoft software from that long ago with modern alternatives on Linux and Mac.

That said, I agree that Visual Studio was extremely slow and clunky in the first half of the 2010s.

jen20

8 days ago

I didn’t compare it with a modern alternative. I compared its debugging tools with Instruments of the same vintage, and pointed out that the last time I tried VS it couldn’t keep up with basic typing.

Dalewyn

8 days ago

NT 10.0 hails from 2015 (Windows 10) and was re-released in 2021 (Windows 11).

juunpp

9 days ago

> Visual Studio (not Code) is a massively superior development and debugging tool to anything that the Unix crowd have cooked up (gdb? perf?)

VS is dogshit full of bloat and a UI that takes a PhD to navigate. CLion and QTCreator embed gdb/lldb and do the debugging just fine. perf also gets you more system metrics than Visual Studio does; the click vs CLI workflow is mostly just workflow preference. But if you're going to do a UI, at least don't do it the way VS does.

SideQuark

8 days ago

$20B+ annually is not “nearly negligible.” That’s more revenue than all but 3 other software companies: Oracle at $46B, SAP at $33B, and Salesforce at roughly $35B. It’s more annual revenue than Adobe and every other software company.

ruthmarx

8 days ago

> I'd love it if Microsoft simply abandoned NT and threw their weight behind the Linux kernel

Oh hell no!

Diversity in operating systems is important, and the NT architecture has several advantages over the Linux approach. I definitely don't want just one kernel reigning supreme, not yet at least - although that is probably inevitable.

EvanAnderson

9 days ago

> I guess a less extreme option would be for Microsoft to extend NT to be POSIX compliant...

Microsoft had this and abandoned it. I was building GNU software on NT in 2000 under Interix. It became Services for Unix and then was finally abandoned.

lproven

8 days ago

> was finally abandoned.

Was finally _replaced_ by WSL.

pxc

8 days ago

By WSL1. But WSL2 is a VM running a Linux kernel, not POSIX compatibility for Windows.

There's still all kinds of pain and weirdness surrounding the filesystem boundary with WSL2. And contemporary Windows still has lots of inconsistency when trying to use Unix-style paths (which sometimes work natively and sometimes don't), and Unix-y Windows apps are still either really slow or full of hacks to get semi-decent performance. Often that's about Unix or Linux expectations like stat or fork, but sometimes other stuff (see, for instance, Scoop's shim executable system that it uses to get around Windows taking ages to launch programs when PATH is long).
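
For context, a shim in that sense is just a tiny stub executable that launches a hard-coded target and mirrors its exit code - roughly this shape (my own sketch, not Scoop's actual code; the target path is hypothetical):

    /* Sketch of a launcher shim: start a fixed target and mirror its exit code. */
    #include <windows.h>

    int wmain(void)
    {
        STARTUPINFOW si = { sizeof(si) };
        PROCESS_INFORMATION pi;
        wchar_t cmd[] = L"C:\\tools\\real-app.exe";  /* hypothetical target; a real
                                                        shim would also forward argv */
        DWORD code = 1;

        if (!CreateProcessW(NULL, cmd, NULL, NULL, FALSE, 0, NULL, NULL, &si, &pi))
            return 1;
        WaitForSingleObject(pi.hProcess, INFINITE);
        GetExitCodeProcess(pi.hProcess, &code);
        CloseHandle(pi.hThread);
        CloseHandle(pi.hProcess);
        return (int)code;
    }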

WSL2 also just isn't a real substitute for many applications. For instance, I'd like to be able to use Nix to easily build and install native Windows software via native toolchains on Windows machines at work. You can't do such a thing with WSL2. For that you need someone who actually knows Windows to do a Windows port, and by all reports that is very different from doing a port to a Unix operating system.

Idk if what people are asking for when they say 'POSIX compliant' with respect to Windows really has much to do with the POSIX standard (and frankly I don't think that matters). But they're definitely asking for something intelligible and real that Windows absolutely lacks.

EvanAnderson

8 days ago

> But they're definitely asking for something intelligible and real that Windows absolutely lacks.

Interix was what Windows lacks, but it was abandoned. It wasn't a Linux compatibility layer like WSL1 (or just a gussied-up Linux VM like WSL2). It was a freestanding implementation of POSIX and building portable software for it was not unlike building software portable to various *nixes. GNU autotools had a target for it. I built software from source (including upgrading the GCC it shipped with).

It was much more elegant than WSL and was built in the spirit of the architecture of NT.

RiverCrochet

8 days ago

IIRC Interix was a separate "subsystem" in the Windows API model - psxss.exe, with Win32 covered by csrss.exe - and, believe it or not, there was an OS/2 one too.

pxc

8 days ago

What does it say about the practical usefulness of this Windows facility that MS has, it seems, never maintained one of these 'personalities' long-term?

RiverCrochet

8 days ago

There was a lot in the air in the early 90's when Windows NT was born - it wasn't a given that Windows, Intel, or heck even TCP/IP were going to be the tech mainstays they are today. So the whole "subsystem" thing is part of some seriously good long-term strategic planning, though you know it was definitely also about having one foot out of the door if their partnership with IBM went south, which it did.

EvanAnderson

8 days ago

I think it's probably business case and revenue potential, not practical usefulness. I felt like Interix was plenty useful but probably couldn't earn its keep. I think the fact that pluggable personalities even exist in NT speaks to the general Microsoft embrace / extend / extinguish methodology. They were a means to an end to win contracts.

lproven

7 days ago

> What does it say about the practical usefulness of this Windows facility that MS has, it seems, never maintained one of these 'personalities' long-term?

I am no defender of MICROS~1 but I think this is a misrepresentation.

1. Win32 is an NT personality and it is still actively maintained after 31 years.

2. Win16 ran on NTVDM which arguably is tantamount to a personality, and that is still present and works in Windows 10 32-bit today.

3. Downvotes or not, I stand by my point: the original POSIX personality became Windows Services for Unix, which went through 4 releases: 1.0, 2.0, 3.0, and 3.5.

https://en.wikipedia.org/wiki/Windows_Services_for_UNIX

V2 was effectively replaced by Interix.

But WSU was effectively a proprietary x86-32 Unix. Those are all dead and gone now, Xinuos notwithstanding, and having such a tool is no use in C21.

So, it was axed 20Y ago. 12Y later it was replaced by WSL.

WSL 1 was replaced by WSL2, and I mourn its lost potential. I feel WSL1 should have become a proper NT personality, which would have resulted in some improvements to Windows' capabilities.

pxc

7 days ago

My question was genuine, not just rhetorical. I appreciate the additional context here, especially that Win32 is implemented as a long-lived NT personality. It's indeed a bummer that Microsoft didn't see it as expedient to maintain the others or continue to grow WSL1.

Windows Services for Unix was also longer-lived than I'd realized. Was that just before its time or did it have some other problem?

therein

9 days ago

> Cygwin, msys2/git-bash are all fantastic but they are no replacement for the kind of development experience you get on Linux & MacOS.

> WSL1 was a great start and gave me hope, but is now abandonware.

Exactly my thoughts. I really admired the design and how far WSL1 got. It is just sad to see it abandoned.

> WSL2 is a joke, if I wanted to run Linux in a VM, I'd run Linux in a VM.

I couldn't have said it better. If I wanted to run Linux in a VM, I'd run Linux in a VM, why are we pretending something special is going on.

wkat4242

9 days ago

Yes WSL1 was really special. Talking to the NT kernel from a Linux environment.

I think it was mainly the docker crowd that kept asking for compatibility there :(

pxc

8 days ago

Microsoft could have implemented the Docker API as part of WSL1 instead of loading up a real Linux kernel for it. That's how LX Zones on Illumos work for running Docker containers on non-Linux without hardware virtualization.

I'm sure it's tricky and hard (just like WSL1 and WINE are tricky and hard), but we know it's at least possible because that kind of thing has been done.

michalf6

8 days ago

>Exactly my thoughts. I really admired the design and how far WSL1 got. It is just sad to see it abandoned.

But why was it better, other than aesthetic preference?

7bit

8 days ago

> My understanding is that the main thing keeping Windows relevant is the support for legacy software, so they'd be hesitant to jeopardize that with any bold changes to the kernel or system APIs.

Without Windows, there would be no platform to sell Office on (macOS aside). But that's just a side note.

The important piece you are missing is this: the entirety of Azure runs on an optimized variant of Hyper-V, hence all of Azure runs on Windows. That is SUBSTANTIAL!

PeterStuer

8 days ago

"I guess a less extreme option would be for Microsoft to extend NT to be POSIX compliant"

There was a time when MS sales touted NT as being more POSIX compatible than the UNIXes.

lproven

8 days ago

> I'd love it if Microsoft simply abandoned NT and threw their weight behind the Linux kernel

This entire article is an elegant argument why this would be a terrible idea. Didn't you RTFA?

> I guess a less extreme option would be for Microsoft to extend NT to be POSIX compliant

Way to expose and highlight your ignorance. :-( NT 3.1, the first release, was POSIX compliant in 1993, and every release since has been.

486sx33

9 days ago

Pulling WSL from windows 10 was particularly nasty

shiroiushi

8 days ago

>threw their weight behind the Linux kernel (or if GNU is too restrictive

The GPL isn't too restrictive. Google has no issue with it on Android (which uses a modified Linux kernel). GPL doesn't mean you have to open-source everything, just the GPL components, which in the case of the Linux kernel, is just the kernel itself. MS already contributes a bunch of drivers (for their hypervisor) to the Linux kernel. They could easily make a Linux-based OS with their own proprietary crap on top if they wanted to.

>support for Android apps without emulation

They wouldn't need CPU-level emulation, but the API would need some kind of compatibility layer, similar to how WINE serves this purpose for Windows applications on Linux.

>Microsoft could build Windows on top of that POSIX kernel and provide a compatibility layer for NT calls and Win32 APIs.

They don't need to: they can just use WINE. They could improve that, or maybe fork it and add some proprietary parts like CodeWeavers does, or they could even just buy out CodeWeavers.

PeterStuer

8 days ago

In Microsoft's perfect world your local machines would just be lightweight terminals to their Azure mainframe.

pjmlp

8 days ago

Catching up with Google's Chromebook, so worshiped around here.

alt227

8 days ago

This is coming. IMO in 20 years this will be how all devices, including phones, work.

alternatex

8 days ago

Without massive (exponential) battery/efficiency improvements it won't happen. Networking isn't something you can magically wave away. It has a cost.

cesarb

8 days ago

You'd need massive networking improvements too. Telling someone "try next to the stairs, the cellular signal's better there" is an example I saw yesterday (it was a basement level), and that's not uncommon in my experience. You have both obstacles (underground levels, tunnels, urban canyons, extra thick walls, underwater) and distance (large expanses with no signal in the middle of nowhere); satellites help with the latter but not with the former. Local computing with no network dependencies works everywhere, as long as you have power.

alt227

8 days ago

Yes but by removing pretty much all processing on the end device and making it a thin client, you can extend battery life exponentially.

pxc

8 days ago

Is it actually the case that local computation on mobile devices is much more expensive than running the radios? I was under the impression that peripherals like the speakers, the radios, and the display often burn up much more power than local manipulation of bits.

alt227

5 days ago

You are definitely correct in that the screen takes a big chunk of the power, but it is my understanding that the CPU takes the most. This is why you cannot run x86 systems on battery power very efficiently.

Look at it this way: older laptops on x86 have the same screens as the newer ARM-based laptops, but the ARM laptops have significantly more battery life using the same battery tech. This is definitely a sign that the processor is the biggest user of power in the system.

phendrenad2

9 days ago

> an aging design

What does that mean, exactly? Linux is also an "aging design", unless I missed a big announcement where they redesigned it at any point since 1994.

runjake

9 days ago

That was in response to the beginning of the article:

"I’ve repeatedly heard that Windows NT is a very advanced operating system"

It's very advanced for decades ago. It's not meant as an insult.

About 20 years ago, despite being a Linux/UNIX/BSD diehard, I went through the entire Inside Windows NT book word by word and took up low-level NT programming and gained a deep respect for it and Dave Cutler. Also a h/t to Russinovich who despite having better things to do running Winternals Software[1], would always patiently answer all my questions.

1. https://en.wikipedia.org/wiki/Sysinternals

user

8 days ago

[deleted]

kbolino

9 days ago

Linux actually did have some pretty significant redesigns with some notable breaking changes. It wasn't until the 2.4 line ended in the late oughts that Linux as we know it today came fully into existence.

yencabulator

8 days ago

Linux 2.6 internals were very different from 2.4 internals which were hugely different from 2.2. Programming for the three was almost like targeting 3 different kernels.

dredmorbius

8 days ago

What were some of those changes / developments?

kbolino

6 days ago

Some major, visible components to be added in 2.6 over 2.4 were ALSA, LVM2, and udev, all of which remain in modern kernels. The 2.4 kernel series also had a lot of differences over 2.2 but many of them were "half-baked" compared to where they ended up in 2.6 (e.g., input subsystem, iptables).

Also, module loading was changed significantly; there's some documentation at https://tldp.org/HOWTO/Module-HOWTO/linuxversions.html

The evolution starting from 2.6 has been much more gradual, especially at the interface between the kernel and other code. The version numbers are not nearly as significant as they used to be. There were more fundamental changes between 2.4 and 2.6 than between 3.0 and 4.0. Instead of discrete leaps, the kernel now changes through continuous, small increments, and such a pace was made possible by the current (2.6+) kernel architecture.

ofrzeta

9 days ago

FWIW Linux got support for kernel modules in January 1995.

IgorPartola

8 days ago

Yeah this was one thing I spotted as well. The author seems to miss the fact that the norm for Unix/Linux is that the OS itself ships the drivers, whereas MS assumes the manufacturer should provide them and the OS should merely provide the capability to load them.

It also entirely overlooks how that system allows a user with no specialized knowledge to authorize random code to run in a privileged environment, which led to vulnerabilities that had their own vulnerabilities.

HPsquared

9 days ago

cf Terry Davis saying "Linux wants to be a 1970s mainframe".

jonathaneunice

8 days ago

Every new system wants to be a mainframe when it grows up. VMS, Unix, Linux, NT...they all started "small" and gradually added the capabilities and approaches of the Bigger Iron that came before them.

Call that the mainframe--though it too has been evolving all along and is a much more moving target than the caricatures suggest. Clustering, partitions, cryptographic offload, new Web and Linux and data analytics execution environments, most recently data streaming and AI--many new use modes have been added since the 60s and 70s inception.

Someone

8 days ago

> Every new system wants to be a mainframe when it grows up. VMS, Unix, Linux, NT...they all started "small" and gradually added the capabilities and approaches of the Bigger Iron that came before them

MacOS started on desktop, moved from there to smartphones and from there to smartwatches. Linux also moved ‘down’ quite a bit. NT has an embedded variant, too (https://betawiki.net/wiki/Windows_NT_Embedded_4.0, https://en.wikipedia.org/wiki/Windows_XP_editions#Windows_XP..., https://en.wikipedia.org/wiki/Windows_IoT).

jonathaneunice

8 days ago

True. Every new system wants to be just about everything when it grows up. Run workstations, process transactions, power factors, drive IoT, analyze data, run AI...

"Down" however is historically a harder direction for a design center to move. Easier to add features--even very large, systemic features like SMP, clustering, and channelized I/O--than to excise, condense, remove, and optimize. Linux and iOS have been more successful than most at "run smaller, run lighter, fit into a smaller shell." Then again, they also have very specific targets and billions of dollars of investment in doing so, not just hopeful aspirations and side-gigs.

beeflet

8 days ago

TD had some interesting ideas when it came to simplifying the system, but I think the average person wants something inbetween a mainframe and a microcomputer.

In Linux/Unix there is too much focus on the "multiuser" and "timesharing" aspect of the system, when in the modern day you generally have one user with a ton of daemons, so you're forced to run daemons as their own users and then have some sort of init system to wrangle them all. A lot of the unixisms are not as elegant as they should be (see plan9, gobolinux, etc).

TempleOS is more like a commodore 64 environment than an OS: there's not really any sort of timesharing going on and the threading is managed manually by userspace programs. One thing I like is that the shell language is the same as the general programming language (HolyC).

anthk

8 days ago

Every modern OS wants to be that, even iOS, at least internally.

fortran77

9 days ago

> The NT kernel is pretty nifty, albeit an aging design.

Unix is an Apollo-era technology! Also an aging design.

UniverseHacker

9 days ago

Except unix nowadays is just a set of concepts and conventions incorporated into modern OSs

jiggawatts

9 days ago

How “modern” are they when they’re just a bunch of shell scripts on top of POSIX? SystemD caught up to NT4 and the original MacOS.

xattt

9 days ago

> SystemD caught up to NT4 and the original MacOS.

The transition happened to the huffing and puffing/kicking and screaming of many sysadmins.

ruthmarx

8 days ago

Still a minority of sysadmins though. Most seem to have embraced it to an extent that's honestly a little sad to see. I liked to think of the linux community as generally being a more technical community, and that was true for a long time when you needed more grit to get everything running, but nowadays many just want Linux to be 'free windows'.

RiverCrochet

8 days ago

> nowadays many just want Linux to be 'free windows'

This means Linux has "made it."

> I liked to think of the linux community as generally being a more technical community, and that was true for a long time when you needed more grit to get everything running

I guess that grit was a gateway to a basic Linux experience for a long time - it did take a lot of effort to get a normal desktop running in the early to mid 90's. But that was never going to last - technical people tend to solve problems and open source means they're going to be available to anyone. There are new frontiers to apply the grit.

ElectricalUnion

9 days ago

If by "modern" you mean stuff between 1930 and 1970, sure, most contemporany OSes can trace roots from that era.

leftyspook

8 days ago

Set of concepts derived from "whatever the hell Ken Thompson had in his environment circa 1972".

phendrenad2

9 days ago

What percent of Unix users are using a "modern OS" and what percentage are using Linux, which hasn't significantly changed since it was released in 1994?

UniverseHacker

9 days ago

My point was that most people are using things like Linux, macOS, etc. nowadays, which are all also pretty old by now but not nearly as old as AT&T Unix.

SoothingSorbet

8 days ago

Linux has changed dramatically since its first release. It has major parts rewritten every decade or so, even. It just doesn't break its ABI with userspace.

phendrenad2

8 days ago

Of course, I meant the design hasn't changed. Linux has had a lot of refactoring, and probably Windows has also.

SSLy

9 days ago

Let's be charitable: removal of the global kernel lock was a fairly big change.

fasa99

4 days ago

The "aging design" arguments holds water like a sieve. Electricity and engines are 1800s vintage designs The wheel is a prehistorical aging design american government is an aging design

The quality of an idea is independent of the time of its conception.

The utility of an idea is dependent on the time and place where it may be used however.

pjmlp

8 days ago

That was the whole point of WinRT and UWP, and the large majority of the Windows developer community rebelled against it, unfortunately.

It didn't help that, in trying to cater to those developers, management kept rebooting the whole development approach, making adoption even worse.

marcodiego

9 days ago

I created a file named aux.docx on a pendrive with Linux. Tried to open it on Windows 7. It crashed Word with a strange error. Don't know what would happen on 8+.

uncanneyvalley

9 days ago

It would fail, too. ‘CON’ has been a reserved name since the days of DOS (actually CP/M, though that doesn’t have direct lineage to Windows) where it acted as a device name for the console. You can still use it that way. In a CMD window:

`type CON > file.txt`, then type some stuff and press CTRL+Z.

https://learn.microsoft.com/en-us/windows/win32/fileio/namin...

nullindividual

9 days ago

This is a Win32-ism rather than an NT-ism. This will work:

    mkdir \\.\C:\COM1
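
The same distinction shows up from C: the reserved-name check happens in Win32 path parsing, and the "\\?\" prefix skips that parsing, so a name like aux.docx can be created. A minimal sketch of my own (it assumes C:\temp exists):

    /* Sketch: create a file with a Win32-reserved base name by using the
       "\\?\" prefix, which bypasses Win32 DOS-device name parsing. */
    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        HANDLE h = CreateFileW(L"\\\\?\\C:\\temp\\aux.docx",   /* assumes C:\temp exists */
                               GENERIC_WRITE, 0, NULL,
                               CREATE_ALWAYS, FILE_ATTRIBUTE_NORMAL, NULL);
        if (h == INVALID_HANDLE_VALUE) {
            printf("CreateFileW failed: %lu\n", GetLastError());
            return 1;
        }
        CloseHandle(h);
        puts("Created aux.docx via the \\\\?\\ prefix");
        return 0;
    }

The resulting file can then be awkward to open or delete with tools that only use plain Win32 paths, which is presumably the kind of thing that tripped up Word in the comment above.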

ninetyninenine

8 days ago

Didn't the article say that Unixes were even more full of cruft?

ruthmarx

8 days ago

Did it?

ninetyninenine

8 days ago

It did. The whole article was about how the NT kernel is better designed.

ruthmarx

8 days ago

Seems you answered your own question then :)

ninetyninenine

8 days ago

Question is more implied than literal. Why are you commenting on cruft in NT when the article is all about Unix cruft? Why no mention about the contrast? No snark. Honest question.

giancarlostoro

8 days ago

I'd rather we get rid of all the marketing crap first.

pasc1878

8 days ago

Does any common OS have a modern design?

Unix is around the same age.

Andrex

8 days ago

Imagine the alternate reality where we got Longhorn with WinFS instead of Vista.

johnflan

6 days ago

Certainly would have helped in the era of embedding AI into the operating system.

user

8 days ago

[deleted]

anthk

9 days ago

Win32 is not the issue, MS could just create shims for these in a secure way. It's 2024, not 1997.

Ditto with GNU/Linux and old SVGAlib games. There should already have been a wrapper mapping them onto SDL2.

runjake

9 days ago

I'm not sure what you're trying to say here, but those "shims" exist. Apps generally do not talk directly to the Executive (kernel). Instead, the OS has protected subsystems that publish APIs.

Apps talk to the subsystem, and the subsystem talks to the Executive (kernel).

Traditionally, Windows apps talk to the Win32 subsystem[1]. This subsystem, as currently designed, is an issue as described in my previous comment.

1. https://en.wikipedia.org/wiki/Architecture_of_Windows_NT#Win...

Caveat: Details of this may have changed in the last couple major Windows versions. I've been out of the NT game for a bit. Someone correct me, if so.

anthk

8 days ago

Yes, I will correct you. DirectDraw games will run dog slow under Windows 8 and up. You can run them at full speed with WineD3D, as it has libraries to map both GDI and DDraw to OpenGL.

nox101

9 days ago

Seems to me they should pull an Apple. Run everything old in some "rosetta" like system and then make something 100% new and try to get people to switch, like say no new updates except security updates for the old system so that apps are incentivized to use the new one.

wvenable

9 days ago

Nobody wants something 100% new. Users don't want it. Developers don't want it. You can make a new OS but then you'll have zero developers and zero users.

Yet this fantasy exists.

And as soon as you make something new, it'll be old, and people will call for its replacement.

nox101

6 days ago

Some users clearly wanted it because Apple did it and are doing just fine.

Users and business want to stop being hacked. Windows will never achieve this without starting over.

wvenable

6 days ago

Apple did nothing of the sort. NeXTStep was already old and established, then they had to add a whole classic compatibility API layer because developers balked and they had to change the UI to make it more classic Mac OS like. They only bought NeXT because they failed at building their own next-generation replacement for Mac OS at the time Microsoft succeeded with Windows NT.

> Users and business want to stop being hacked. Windows will never achieve this without starting over.

That's not true at all.

nox101

6 days ago

So what if Apple bought NeXTStep? They still switched all their users over from OS 9 to the completely different OS X, and all the software either migrated or was left behind.

wvenable

6 days ago

They had to spend an additional year of development adding the Carbon API so OS-9 apps could be ported. They had to alter the UI. They picked an already established operating system.

You're replying to a post where I said users and developers don't want something 100% new. You said "Apple did it" but they didn't. There was nothing 100% new about OS X. They had to do everything possible to make it as not-new as it could be while being a completely different OS.

OS X proves my point.

nox101

5 days ago

Apple moved everyone to a "new to the user" OS. Sorry if that wasn't clear. It's irrelevant that OS-X was based on some other OS. To every user of OS-9 it was a new OS for them. Effectively Apple got all of their users to switch OSes. Microsoft should do the same.

wvenable

4 days ago

What 10+ year old existing commercial OS should Microsoft move their users to?

You aren't really addressing the point. To users, OS X was the next version of Mac OS. Apple took NeXTStep and added a pile of classic MacOS UI and added APIs so developers could easily port their classic applications. It wasn't 100% new even to the users. Familiar UI. Same apps.

If Apple had just thrown BeOS on their machines and it had turned out to be a success, then there might be some argument for users and developers loving a 100% new OS. But, as it turns out, even an OS as advanced as BeOS is not what users and developers really want. Users want to be able to run their same applications in a familiar environment, and developers want to make use of their existing code and existing skills.

Another way to look at it is Apple moved their users to their own "Windows NT" -- something that Microsoft also did around the same time frame with Windows XP.

wannacboatmovie

8 days ago

People live under the delusion that OS X was "100% new" when in fact it was warmed-over NeXTSTEP from 1989. Most of them probably have never seen or heard of a NeXT workstation.

To reinforce how much people hate "100% new": how long has Microsoft been working on ReFS? 20 years? The safest, most boring job in the world must be NTFS developer.

wvenable

8 days ago

The OS X story is even worse than that. When Apple first released OS X to developers, like Adobe, they balked. They weren't going to port their Mac applications to this "new" operating system. Apple had to take another year to develop the Carbon APIs so Mac developers could more easily port their apps over.

wannacboatmovie

9 days ago

You left out the important part: abandon the Rosetta-like system a mere few years later once you've lured them in, then fuck everyone over by breaking backwards compatibility every OS release. Apple really has the "extinguish" part nailed down.

teleforce

9 days ago

>As a result, NT did support both Internet protocols and the traditional LAN protocols used in pre-existing Microsoft environments, which put it ahead of Unix in corporate environments.

Fun fact: to accelerate the networking and Internet capability of Windows NT, given the complexity of coding a compatible TCP/IP implementation, the developers just took the entire TCP/IP stack from BSD and called it a day, since the BSD license allows for that.

They could not have done that with Linux, since its code is GPL licensed, and this is probably when Microsoft's FUD attacks on Linux started, culminating in the "Linux is a cancer" statement by ex-CEO Ballmer.

The irony is that Linux is now the most used OS on the Azure cloud platform (where Cutler was a chief designer), not the BSDs [2].

[1] History of the Berkeley Software Distribution:

https://en.wikipedia.org/wiki/History_of_the_Berkeley_Softwa...

[2] Microsoft: Linux Is the Top Operating System on Azure Today:

https://news.ycombinator.com/item?id=41037380

nullindividual

8 days ago

Not quite.

NT 3.1 used a TCP stack from Spider Systems[0]. It was completely rewritten by Microsoft in 3.5[1].

The userland utilities, ping, tracert, etc were brought over from BSD.

[0] https://en.wikipedia.org/wiki/Spider_Systems

[1] https://web.archive.org/web/20151229084950/http://www.kuro5h...

teleforce

8 days ago

Are you pretty sure about that? These are some quotes, among others, on the topic from a forum discussion back in 2005:

"While Microsoft did technically 'buy' their TCP/IP stack from Spider Systems, they did not "own" it. Spider used the code available and written for BSD, so it doesn't appear that Microsoft directly copied BSD code (which, again, it is perfectly legal and legitimate to copy), they got it from a third party. Also ftp, rcp and rsh seems to have come with the bundle. I have heard that ftp was, but have never used rcp and rsh on Windows, so don't know what version(s) those were or were not included in any particular Windows version. Anyone can look through the .exes for those files and look for "The Regents of the University of California" copyright notice, if they want to see for themselves (rather than take the word of some anonymous geeks on a forum) ;)"

[1] Windows TCPIP Stack Based on Unix ?

https://www.neowin.net/forum/topic/381190-windows-tcpip-stac...

nullindividual

8 days ago

Like I said, the userland applications (ping, tracert, etc.) were ports from BSD, probably nearly 1:1 copies.

The TCP & IP stacks were written by Microsoft in NT 3.5.

What Spider Systems used (again, used by Microsoft in NT 3.1 due to time pressures) may have originated from BSD, but we don't know.

You can browse the NT4 source code TCP/IP stack. Just search GitHub.

teleforce

8 days ago

>What Spider Software used (again, used by Microsoft in NT 3.1 due to time pressures) may have originated from BSD

In all likelihood it's from BSD don't you think?

nullindividual

8 days ago

That's an assumption that neither of us are qualified to make.

BSD wasn't the only TCP/IP stack on the market[0].

[0] https://en.wikipedia.org/wiki/Internet_protocol_suite#Adopti...

teleforce

8 days ago

If Spider engineers didn't even bother to change any of the BSD userspace utilities, what is the chance that they built, from scratch, an entire in-kernel TCP/IP stack perfectly compatible with the outside world?

nullindividual

7 days ago

The Spider engineers weren't the ones to port BSD utilities. That was Microsoft.

teleforce

7 days ago

Perhaps you misunderstood my statement; I am not talking about NT there. Spider Systems had its own network userspace utilities and tools, but almost all of them were from BSD, not developed by them. If their tools were originally from BSD, what were the chances that they developed their very own TCP/IP stack from scratch, with good compatibility with the outside world?

This reminds me of a particular incident that happened to Brendan Gregg of eBPF fame, when a company demoed their allegedly game-changing kernel tracing tools to him and, as it turned out, the tools were actually Brendan's very own tools.

netbsdusers

8 days ago

> the developers just took the entire TCP/IP stack from BSD and called it a day, since the BSD license allows for that

They didn't, but I don't know why you're putting a sinister spin on this either way. Of course the licence allows for that. It was the express intention of the authors of the BSD networking stack that it be used far and wide.

They went to considerable effort to extract the networking code into an independent distribution (BSD Net/1) which could be released under the BSD licence independently of the rest of the code (parts of which were encumbered at the time). They wanted it to be used.

teleforce

8 days ago

>They didn't

Didn't they?

I am not questioning the fact that BSD is a commercial- and Microsoft-'friendly' license, and that Microsoft and the Unix vendors hired many BSD developers; it was a win-win situation for them.

What is so sinister about saying that Microsoft at the time didn't particularly like Linux's GPL license, and that their then-CEO called it a cancer (their words, not mine), since it wasn't compatible with their commercial endeavours at that particular time? Perhaps you have forgotten, or are too young to remember, Microsoft's hostility towards Linux and their well-documented FUD initiatives [1].

[1] Halloween documents:

https://en.wikipedia.org/wiki/Halloween_documents

nullindividual

8 days ago

Linux wasn't a thing by the time NT was in development. Hell, the GPL itself wasn't a thing when NT began development!

teleforce

8 days ago

Are you pretty sure about that?

Linux 0.99 was released to the world with a TCP/IP stack back in September 1992, while Windows NT was still in development: version 3.1, with the adopted Spider TCP/IP stack, was only released in July 1993 [1], [2].

There's a time window (pun intended) between the public release of a Linux kernel with working TCP/IP and the official public release of Windows NT. If Microsoft had wanted to adopt Linux's TCP/IP (and I doubt they did, since it was still pre-1.0 at the time), theoretically they could have, because it would have been just a copy-paste exercise, as they most probably did with the Spider TCP/IP stack. But what I am saying is that even if they had wanted to, it would have been illegal, because by a stroke of Linus' genius the Linux kernel was relicensed under the GPL in the very same year the 0.99 kernel was released.

[1] Linux kernel version history:

https://en.wikipedia.org/wiki/Linux_kernel_version_history

[2] Windows NT:

https://en.wikipedia.org/wiki/Windows_NT

nullindividual

8 days ago

> Are you pretty sure about that?

NT OS/2 started development in 1988 and NT Win32 in 1990.

Build 297 of NT 3.1 contains a TCP/IP stack dated June 25th, 1992. This is not the earliest public build.

teleforce

7 days ago

Are you pretty sure about that?

I've checked, but there are no references that mention Build 297 of NT 3.1, released in 1992, coming with any TCP/IP stack.

It seems the TCP/IP stack was still under development at that time and was not included in that early beta build; it only shipped in the 1993 stable release. Perhaps the more correct word is not 'development' but 'testing', since they took the TCP/IP stack from Spider Systems, and there is a very high probability that the Spider stack itself was based on BSD code.

nullindividual

7 days ago

They licensed Spider Systems' TCP/IP stack. And you have no knowledge that Spider leveraged BSD code. You're probably too young to remember, but there were many vendors making TCP/IP stacks back in those days: some based on pre-existing stacks, others built from the ground up.

https://ia904500.us.archive.org/view_archive.php?archive=/23...

TCPIP.SYS is the TCP/IP stack in Windows.

I'm unsure as to why you're making unsubstantiated claims and sticking with them. You have zero proof that the claims you're making are true. I'd love to have the source for Spider's TCP stack, or at least a contact who worked on it to ask; it's interesting history. I don't care one way or the other: it was a very temporary stack that didn't make it into 3.5, where the entire stack architecture changed (no longer based on SysV STREAMS).

teleforce

7 days ago

> Linux wasn't a thing by the time NT was in development

So the statement you provided here is wrong and misleading, then?

According to this book, NT development took five years to complete and reached its first stable release by 1993 [1].

When Linux was at version 0.99 in 1992 it already had a working implementation of TCP/IP, and in that very same year Linus relicensed Linux under the GPL; all of this happened during NT's development, didn't it?

[1] Showstopper: The Breakneck Race to Create Windows NT and the Next Generation at Microsoft:

https://archive.org/details/showstopperbreak00zach

Ericson2314

8 days ago

I am a third of the way through this and, I'm afraid, it doesn't seem like a very good breakdown.

For example.

- Portability. Who cares? Even in the 1990s, NetBSD was a thing. We've since learned that portability across conventional-ish hardware doesn't actually affect OS design very much.

Overall, the author is taking jargon / marketing terms too much at face value. In many cases, Windows and Unix will use different terms to describe the same things, or we have terms like "object-oriented kernel" that, by default, I assume don't actually mean anything: "object oriented" was too popular an adjective in the 1990s to be trusted.

- "NT doesn’t have signals in the traditional Unix sense. What it does have, however, are alerts, and these can be kernel mode and user mode."

  A topic sentence should not start with the names of things; it is extraneous.

My prior understanding is that NT is actually better in many ways, but I don't feel like I am any closer to learning why.

pavlov

8 days ago

> ‘we have terms like "object-oriented kernel" that, by default, I assume don't actually mean anything’

The author did explain what it specifically means in the context of the NT kernel: centralized access control, common identity, and unified event handling.
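
For the "unified event handling" part in particular, there is a concrete, observable meaning: NT exposes processes, threads, events, mutexes and so on as kernel objects behind handles, and a single family of wait APIs works across all of them. A minimal Win32 sketch of that idea (assumes a Windows build environment; error handling is mostly omitted and notepad.exe is just a stand-in child process):

  /* One concrete reading of "unified event handling": a process handle and
     an event handle are different object types, but the same wait API
     takes both. */
  #include <windows.h>
  #include <stdio.h>

  int main(void)
  {
      STARTUPINFOA si = { sizeof(si) };
      PROCESS_INFORMATION pi;
      char cmd[] = "notepad.exe";

      HANDLE ev = CreateEventA(NULL, TRUE, FALSE, NULL);     /* an event object  */
      if (!CreateProcessA(NULL, cmd, NULL, NULL, FALSE, 0,
                          NULL, NULL, &si, &pi))             /* a process object */
          return 1;

      /* One wait primitive for dissimilar object types: returns when either
         the child process exits or the event is signaled. */
      HANDLE objects[2] = { pi.hProcess, ev };
      DWORD which = WaitForMultipleObjects(2, objects, FALSE, INFINITE);
      printf("object %lu was signaled\n", (unsigned long)(which - WAIT_OBJECT_0));

      CloseHandle(ev);
      CloseHandle(pi.hThread);
      CloseHandle(pi.hProcess);
      return 0;
  }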