BradleyChatha
3 months ago
In short: I wanted to talk a bit about ASN.1, a bit about D, and a bit about the compiler itself, but couldn't think of any real cohesive format.
So I threw a bunch of semi-related ramblings together and I'm daring to call it a blog post.
Sorry in advance, since I'll admit it's not the greatest quality, but it's really not easy to talk about so much with such brevity (especially since I've already forgotten a ton of stuff I wanted to cover more deeply :( )
mananaysiempre
3 months ago
A small nitpick: I don’t think your intersection example does what you want it to do. Perhaps there’s some obscure difference in “PER-visibility” or whatnot, but at least set-theoretically,
LegacyFlags2 ::= INTEGER (0 | 2 ^ 4..8) -- as in the article
is exactly equivalent to LegacyFlags2 ::= INTEGER (0) -- only a single value allowed
as (using standard mathematical notation and making precedence explicit) {0} ∪ ({2} ∩ {4,5,6,7,8}) = {0} ∪ ∅ = {0}.
giancarlostoro
3 months ago
At least you might be summoning Walter Bright in talking about D. One of my favorite languages I wish more companies would use. Unfortunately for its own sake, Go and Rust are way more popular in the industry.
pjmlp
3 months ago
Unfortunately it lost the opportunity back when Remedy Games and Facebook were betting on it.
The various WIP features, and the ever-switching focus on what might bring more people into the ecosystem, have given way to other languages.
Even C#, Java and C++ have gotten many of the features that were only available in D when Andrei Alexandrescu's book came out in 2011.
Clouudy
3 months ago
I wouldn't say that it's unable to make a comeback; there is still a valid use case from my experience with it. The syntax, mixed memory model, UFCS, and compilation speed are nice quality-of-life features compared to C++, and it still produces a native binary, unlike C# and Java. So if you're starting a new project from scratch there's not much reason not to use it, beyond documentation. And you can interface pretty easily with C/C++, as well as pretty much any other language designed for that sort of thing, but without a lot of syntax changes like Carbon.
I imagine that the scope of its uses has shrunk as other languages caught up, and I don't think it's necessarily a good language for general enterprise stuff (unless you're dealing with C++), but for new projects it's still valid IMO. I think that the biggest field it could be used in is probably games too, especially if you're already writing a new engine from scratch. You could start with the GC and then ease off of it as the project develops in order to speed up development, for example. And D could always add newer features again too, tbh.
pjmlp
3 months ago
You always have to compare ecosystems, not programming language syntax on its own.
Another thing Java and C# have gotten since 2011 is AOT compilation as part of the ecosystem, free of charge (Java had commercial AOT compilers for a while), so not even a native binary is the advantage you imagine.
First D has to finish what is already there in features that are almost done but not quite.
Clouudy
3 months ago
TBH I don't necessarily think that ecosystem is what matters in every application, but it is necessary for most people, I agree. And I do agree with finishing a lot of the half-baked features too, but I'm unsure if the people maintaining the language have the will or the means to do that.
Do you have any other ideas about how D could stand out again?
pjmlp
3 months ago
It is what matters, as most companies pick languages based on SDKs, rather than the other way around: picking a one-trick pony and trying to solve everything with the same language.
That is why outside startups selling a specific product, most IT departments are polyglot.
For D to stand out, there must be a Rails- or Docker-like framework, something that generates enough buzz that early adopters want to go play with D.
However, I don't see it happening in the LLM age, where at the whim of a prompt code can be generated in whatever language, and even that is only a transition step until we start having agent runtimes.
WalterBright
3 months ago
C#, Java and C++ have poor copies of the D features. For example, C++ constexpr is a bad design because a special keyword to signify evaluating at compile time is completely redundant. (Just trigger on constexpr in the grammar.)
C++ modules, well, they should have more closely copied D modules!
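(For readers unfamiliar with the D feature Walter is referring to, here is a minimal illustrative sketch, not from the thread: in D, compile-time function evaluation needs no keyword on the function itself.)

```d
// Any ordinary D function can be evaluated at compile time (CTFE)
// simply by using it in a compile-time context -- no constexpr-style
// annotation on the function is needed.
int square(int n) { return n * n; }

enum atCompileTime = square(7); // 'enum' forces compile-time evaluation

void main()
{
    static assert(atCompileTime == 49); // checked by the compiler, not at runtime
}
```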
pjmlp
3 months ago
They might have, but they are still good enough to keep people on those ecosystems, not bothering at all with D and its current adoption warts concerning IDE tooling and available libraries.
The worse-is-better approach tends to always win.
WalterBright
3 months ago
Good enough is fine until one discovers how much better it can be.
pjmlp
3 months ago
Unfortunately the opportunity window is gone now, and in the LLM age programming languages are becoming less relevant, as programming itself increasingly turns into coordinating agents.
az09mugen
3 months ago
I have a very naïve and maybe dumb question, coming from someone who is used to scripting languages. It's about the `auto` keyword: while it's a nice feature, why is it necessary to write it down? Isn't it possible to basically say to the compiler: "Hey, you see this var declared with no type? Assume by yourself there is an `auto` keyword."
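(A sketch of what the question is about, not an answer from the thread: in D the keyword is what marks the statement as a declaration at all, since a bare `x = 5;` already means assignment to an existing variable.)

```d
void main()
{
    auto x = 5;     // declaration; the type int is inferred
    x = 6;          // assignment to the already-declared x
    // y = 7;       // error: undefined identifier 'y' -- without some
    //              // keyword the compiler can't tell a brand-new
    //              // declaration apart from a typo'd assignment
    const z = 1.5;  // any storage class triggers inference; auto isn't special
}
```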
mort96
3 months ago
I feel like back when D might've been a language worth looking into, it was hampered by the proprietary compilers.
And still today, the first thought that comes to mind when I think D is "that language with proprietary compilers", even though there has apparently been some movement on that front? Not really worth looking into now that we have Go as an excellent GC'd compiled language and Rust as an excellent C++ replacement.
Having two different languages for those purposes seems like a better idea anyway than having one "optionally managed" language. I can't even imagine how that could possibly work in a way that doesn't just fragment the community.
schveiguy
3 months ago
D has been fully unproprietary since 2017. https://forum.dlang.org/post/oc8acc$1ei9$1@digitalmars.com
But that's only the reference compiler, DMD. The other two compilers (including GDC, which is part of gcc) were fully open source before that.
Fully disagree on your position that having all possibilities with one language is bad. When you have a nice language, it's nice to write with it for all things.
sfpotter
3 months ago
Sounds like you should look into it instead of idly speculating! Also, the funny thing about a divisive feature is that it doesn't matter if it fragments the community if you can use it successfully. There are a lot of loud people in the D community who freak out and whine about the GC, and there are plenty more quiet ones who are happily getting things done without making much noise. It's a great language.
mort96
3 months ago
Are you saying that if I'm using D-without-GC, I can use any D library, including ones written with the assumption that there is a GC? If not, how does it not fracture the community?
> There are a lot of loud people in the D community who freak out and whine about the GC, and there are plenty more quiet ones who are happily getting things done without making much noise
This sounds like an admission that the community is fractured, except with a weirdly judgemental tone towards those who use D without a GC?
MrRadar
3 months ago
> Are you saying that if I'm using D-without-GC, I can use any D library, including ones written with the assumption that there is a GC? If not, how does it not fracture the community?
"Are you saying that if I'm using Rust in the Linux kernel, I can use any Rust library, including ones written with the assumption they will be running in userspace? If not, how does that not fracture the community?"
"Are you saying that if I'm using C++ in an embedded environment without runtime type information and exceptions, I can use any C++ library, including ones written with the assumption they can use RTTI/exceptions? If not, how does that not fracture the community?"
You can make this argument about a lot of languages and the particular subsets/restrictions on them that are needed in specific circumstances. If you need to write GC-free code in D you can do it. Yes, it restricts what parts of the library ecosystem you can use, but that's no different from any other language with wide adoption across a wide variety of applications. It turns out that in reality most applications don't need to be GC-free (the massive preponderance of GC languages is indicative of this) and GC makes them much easier and safer to write.
I think most people in the D community are tired of people (especially outsiders) constantly rehashing discussions about GC. It was a much more salient topic before the core language supported no-GC mode, but now that it does it's up to individuals to decide what the cost/benefit analysis is for writing GC vs no-GC code (including the availability of third-party libraries in each mode).
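(The no-GC mode being discussed centers on D's `@nogc` attribute; a minimal sketch for illustration:)

```d
// @nogc makes the compiler reject any operation that could allocate
// on the GC heap inside the annotated function.
@nogc int sumInPlace(scope int[] buf)
{
    int total;
    foreach (x; buf)
        total += x;
    // int[] tmp = new int[](4); // would not compile: 'new' allocates with the GC
    return total;
}

void main()
{
    int[3] data = [1, 2, 3]; // stack storage, no GC involved
    assert(sumInPlace(data[]) == 6);
}
```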
mort96
3 months ago
The RTTI vs no-RTTI thing and the exceptions vs no-exceptions thing definitely does fracture the C++ community to some degree, and plenty of people have rightly criticized C++ for it.
> If you need to write GC-free code in D you can do it.
This seems correct, with the emphasis. Plenty of people make it sound like the GC in D is no problem because it's optional: if you don't want GC you can just write D without a GC. It's a bit like saying that the stdlib in Rust is no problem because you can just use no_std, or that exceptions in C++ are no problem because you can just use -fno-exceptions. All these things are naïve for the same reason: they lock you out of most of the ecosystem.
sfpotter
3 months ago
> This sounds like an admission that the community is fractured, except with a weirdly judgemental tone towards those who use D without a GC?
That's not what I'm saying, and who cares if it's fractured or not? Why should that influence your decision making?
There are people who complain loudly about the GC, and then there are lots of other people who do not complain loudly and also use D in many different interesting ways. Some use the GC, some don't. People get hyper fixated on the GC, but it isn't the only thing going on in the language.
mort96
3 months ago
> who cares if it's fractured or not? Why should that influence your decision making?
Because, if I want to write code in D without the GC, it impacts me negatively if I can't use most of the libraries created by the community.
sfpotter
3 months ago
What domain are you working in? It's hard to be able to say anything specific that might help you if you can't explain a little more clearly what you're working on.
I will say there are a larger and larger number of no-GC libraries. Phobos is getting an overhaul which is going to seriously reduce the amount that the GC is used.
It is probably also worth reflecting for a moment why the GC is causing problems for you. If you're doing something where you need some hard-ish realtime guarantee, bear in mind that the GC only collects when you allocate.
It's also possible to roll your own D runtime. I believe people have done things like replace the GC with allocators. The interface to the GC is not unduly complicated. It may be possible to come up with a creative solution to your problem.
mort96
3 months ago
I work in many domains, some where a GC is totally no problem, and some where I'd rather not have a GC (such as microcontrollers, games, the occasional kernel code, software running on embedded Linux devices with very limited memory).
I'm happy with C++ and Rust for those tasks. For other tasks where I want a GC, I'm perfectly happy with some combination of Go, Python and Typescript. I'm realistically never going to look into D.
sfpotter
3 months ago
I'm glad you've found a nice stack that you like working with! D has been used for every domain mentioned. Whether or not you look into D in the future is no business of mine.
giancarlostoro
3 months ago
Go is a GC language that has eaten a chunk of the industry (Docker, TypeScript, Kubernetes... MinIO... and many more I'm sure) and only some people cry about it. But you know who else owns sizable chunks of the industry? Java and C#, which are both GC languages. While some people waste hours crying about GCs, the rest of us have built the future around them. Hell, all of AI is eaten by Python, another GC language.
sfpotter
3 months ago
And in D, there's nothing stopping you from either using or not using the GC. One of the key features of D is that it's possible to mix and match different memory management strategies. Maybe I have a low-level computational kernel written C-style with manual memory management, and then for scripting I have a quick and dirty implementation of Scheme, also written in D but using the GC. It's perfectly fine for those two things to co-exist in the same codebase, and in fact having them coexist like that is useful.
mort96
3 months ago
> And in D, there's nothing stopping from either using or not using the GC.
Wait, so are you, or are you not, saying that a GC-less D program can use libraries written with the assumption that there's a GC? The statement "there's nothing stopping [you] from not using the GC" implies that all libraries work with D-without-GC; otherwise, the lack of libraries written for D-without-GC would be stopping you from not using the GC.
sfpotter
3 months ago
Sorry, but it's more complicated than this. I understand the point you're making, but if your criterion for using a language is "if any feature prevents me from using 100% of the libraries that have been written for the language, then the language is of no use to me", well... I'm not sure what to tell you.
It's not all or nothing with the GC, as I explained in another reply. There are many libraries that use the GC, many that don't. If you're writing code with the assumption that you'll just be plugging together libraries to do all the heavy lifting, D may not be the right language for you. There is a definite DIY hacker mentality in the community. Flexibility in attitude and approach are rewarded.
Something else to consider is that the GC is often more useful for high level code while manual memory management is more useful for low level code. This is natural because you can always use non-GC (e.g. C) libraries from GC, but (as you point out) not necessarily the other way around. That's how the language is supposed to be used.
You can use the GC for a GUI or some other loose UI thing and drop down to tighter C style code for other things. The benefit is that you can do this in one language as opposed to using e.g. Python + C++. Debugging software written in a mixture of languages like this can be a nightmare. Maybe this feature is useful for you, maybe not. All depends on what you're trying to do.
mort96
3 months ago
You're the guy who said that "nothing is preventing you" from using D without a GC. A lack of libraries which work without a GC is something preventing you from using D without a GC. Just be honest.
sfpotter
3 months ago
I just said there are loads of libraries which have been written explicitly to work with the GC disabled. Did you read what I wrote?
It looks like you work in a ton of different domains. I think based on what I've written in response to you so far, it should be easy to see that D is likely a good fit for some things you work on and a bad fit for others. I don't see what the problem is.
The D community is full of really nice and interesting people who are fun to interact with. It also has a smaller number of people who complain loudly about the GC. This latter contingent is perceived as being maybe a bit frustrating and unreasonable.
I don't care whether you check D out or not. But your initial foray into this thread was to cast shade on D by mentioning issues with proprietary compilers (hasn't been a thing in years), and insinuating that the community was fractured because of the GC. Since you clearly don't know that much about the language and have no vested interest in it, why not leave well enough alone instead of muddying the waters with misleading and biased commentary?
mort96
3 months ago
My conclusion remains that the language is fractured by the optional GC and that its adoption was severely hampered by the proprietary toolchain. Nothing you have said meaningfully challenges that in my opinion, and many things you've said supports it. I don't think anything I've said is misleading.
bw86
3 months ago
Rust is fractured by the optional async-ness of libraries. And all languages are fractured by GPL vs non-GPL libraries.
You have a point, but it is not worth the drama. D's biggest problem comes from the strong opinions of people that have not tried using it.
Kapendev
3 months ago
> language is fractured by the optional GC
That's an interesting fantasy you have constructed. You should try listening to the people who actually use it.
mort96
3 months ago
I did, I listened to sfpotter. From their description, it's the source of a great deal of animosity within the D community.
sfpotter
3 months ago
Well, no, it isn't. There's some frustration. But mostly what I've seen is a lot of lengthy conversations with a lot of give and take. There's a small number of loud people who complain intensely, but there are also people who are against the GC who write lots of libraries that avoid it and even push the language in a healthy direction. If these people hadn't argued so strenuously against the GC, I doubt Phobos would have started moving in the direction of using the GC less and less. This is actually the sign of a healthy community.
pests
3 months ago
Still nothing prevents you from rewriting those libraries to not use a GC.
Stop making excuses or expecting others to do your work for you.
Nothing is preventing you.
Clouudy
3 months ago
There are third party libraries for the language that don't use the GC, but as far as I know there isn't a standardized one that people pick.
timeinput
3 months ago
I'm not strongly for or against (non-deterministic) GC. Deterministic cleanup in Rust or in (no true Scotsman) correctly written C++ has benefits, but often I don't care, and Go / Java / C# / Python are all fine.
I think you're really overstepping with "AI is eaten by Python". I can imagine an AI stack without Python (llama.cpp, for inference rather than training, isn't completely that, but most of its core functionality is not Python, and not GC'd at all); I cannot imagine an AI stack without CUDA + C++. Even the premier Python tools (PyTorch, vLLM) would be non-functional without these.
While some very common interfaces to AI require a GC'd language, I think if you deleted the non-GC parts you'd be completely stuck, with years of digging yourself out, but if you deleted the GC parts you could end up with a usable thing in very short order.
pjmlp
3 months ago
NVidia has decided that the market demand to do everything in Python justifies the development cost of making Python fast in CUDA.
Thus now you can use PTX directly from Python, and with the new cuTile approach you can write CUDA kernels in a Python subset.
Many of these tools get combined because that is what is already there, and the large majority of us don't want, or don't have the resources, to spend bootstrapping a whole new world.
Until there is some monetary advantage in doing so.
pjmlp
3 months ago
For 99% of industry use cases, some kind of GC is good enough, and even when that isn't the case, there is no need to throw the baby out with the bathwater: a two-language approach also works.
Unfortunately those that cry about GCs are still quite vocal; at least we can now throw Rust in their way.
mort96
3 months ago
There's a place for GC languages, and there's a place for non-GC languages. I don't understand why you seem so angry towards people who write in non-GC languages.
WalterBright
3 months ago
And there's a place for languages that smoothly support both GC and non-GC. D is the best language at that.
homebrewer
3 months ago
"AI" is built on C, C++, and Fortran, not Python.
WalterBright
3 months ago
Once one realizes that the GC is just another way to allocate memory in D, it becomes quite wonderful to have a diverse collection of memory management facilities at hand. They coexist quite smoothly. Why should programs be all GC or no GC? Why should you have to change languages to switch between them?
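(A minimal sketch of the mixing Walter describes, for illustration, using only druntime's C bindings:)

```d
import core.stdc.stdlib : malloc, free;

void main()
{
    // GC-managed allocation...
    int[] gcSlice = new int[](4);

    // ...and a manual C-style allocation, side by side in one function.
    auto p = cast(int*) malloc(4 * int.sizeof);
    scope(exit) free(p); // deterministic cleanup, no GC involved

    gcSlice[0] = 1;
    p[0] = 2;
    assert(gcSlice[0] + p[0] == 3);
}
```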
alphaglosined
3 months ago
Indeed the GC is just a library with some helpful language hooks to make the experience nice.
If you understand how it's hooked into, it's very easy to work with. There is only one area of the language related to closure context creation that can be unexpected.
giancarlostoro
3 months ago
I don't think the proprietary compilers were a true setback; look at C# before .NET became as open as it is today (MIT licensed!), and yet the industry took it. I think what D needed was what made Ruby mainly relevant: Rails. D needs a community framework that makes it a strong candidate for a specific domain.
I honestly think if Walter Bright (or anyone within D) invested in a serious web framework for D, even if it's not part of the standard library, it could be worth its weight in gold. Right now only vibe.d stands out, but I have not seen it grow very much since its inception; it's very slow moving. Give me a feature-rich web framework in D comparable to Django or Rails and all my side projects will shift to D. The real issue is it needs to be batteries-included, since D does not have dozens of OOTB libraries to fill in gaps with.
Look at Go as an example: a built-in HTTP server library, production ready. It's not ultra fancy but it does the work.
mort96
3 months ago
C# has Microsoft behind it. D ... doesn't.
There are plenty of people who aren't interested in using languages with proprietary toolchains. Those people typically don't use C#. The people who don't mind proprietary toolchains typically write software for an environment where D isn't relevant, such as .NET or the Apple world.
Clouudy
3 months ago
I do agree with you that there needs to be a good framework though. Either in Web or Games. Web because it's more familiar than Go but also has Fibers, and Games because it's an easier C++. There is also Inochi2D which looks rather professional: https://inochi2d.com/
One of the issues I've seen in the community is just that there aren't enough people in the community with enough interest and enough spare time to spend on a large project. Everyone in the core team is focused on working on the actual language (and day-jobs), while everyone else is doing their own sort of thing.
From your profile you seem to have a lot of experience in the field and in software in general, so I'd like to ask you if you have any other advice for getting the language un-stuck, especially with regards to the personnel issues. I think I'd like to take up your proposal for a web framework as well, but I don't really have any knowledge of web programming beyond the basics. Do you have any advice on where to start or what features/use case would be best as well?
alphaglosined
3 months ago
Getting a web framework into the standard library is something I want to get working, along with a windowing library.
Currently we need to get a stackless coroutine into the language, actors for windowing event handling, reference counting and a better escape analysis story to make the experience really nice.
This work is not scheduled for PhobosV3 but a subset such as a web client with an event loop may be.
Lately I've been working on some exception handling improvements and a start on the escape analysis DFA (but not the escape analysis itself), so the work is progressing. The stackless coroutine proposal needs editing, but it is intended to go through the approval process at the start of next year.
pjmlp
3 months ago
Go would be an excellent GC'd compiled language if it had actually learnt from the history of computing.
I'd rather give that title to languages like C# with Native AOT, or Swift (see chapter 5 of the GC Handbook).
D only lacks someone like Google to push it into mainstream no matter what, like Go got to benefit from Docker and Kubernetes.
whizzter
3 months ago
As someone who has had the displeasure of working with ASN.1 data (yes, certificates), I fully sympathise with the anguish you've gone through (the "6 months of Ansible" HR comments cracked me up as well :D ).
BradleyChatha
3 months ago
It makes me laugh that absolutely no one can say "I've worked with ASN.1" in a positive light :D
cryptonector
3 months ago
Bzzt! Wrong! I have worked with ASN.1 for many years, and I love ASN.1. :)
Really, I do.
In particular I like:
- that ASN.1 is generic, not specific to a given encoding rules (compare to XDR, which is both a syntax and a codec specification)
- that ASN.1 lets you get quite formal if you want to in your specifications
For example, RFC 5280 is the base PKIX spec, and if you look at RFCs 5911 and 5912 you'll see the same types (and those of other PKIX-related RFCs) with more formalisms. I use those formalisms in the ASN.1 tooling I maintain to implement a recursive, one-shot codec for certificates in all their glory.
- that ASN.1 has been through the whole evolution of "hey, TLV rules are all you need and you get extensibility for free!!1!" through "oh no, no that's not quite right is it" through "we should add extensibility functionality" and "hmm, tags should not really have to appear in modules, so let's add AUTOMATIC tagging" and "well, let's support lots of encoding rules, like non-TLV binary ones (PER, OER) and XML and JSON!".
Protocol Buffers is still stuck on TLV, all done badly by comparison to BER/DER.
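(For readers unfamiliar with TLV: every BER/DER value is a tag, a length, then the contents. A minimal hand-assembled illustration, not output from any real codec:)

```d
void main()
{
    // DER encoding of INTEGER 5:
    //   tag 0x02 (INTEGER), length 0x01, contents 0x05
    immutable ubyte[] der = [0x02, 0x01, 0x05];

    assert(der[0] == 0x02); // tag
    assert(der[1] == 0x01); // length of contents in bytes
    assert(der[2] == 0x05); // contents
}
```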
BradleyChatha
3 months ago
Yeah, I know I'm making fun of it a lot (mostly in jest), but it genuinely is a really interesting specification, and it's definitely sad (but not surprising) that it's not a very popular choice outside of its few niche areas.
:) Glad to see someone else who's gone down this road as well.
yujzgzc
3 months ago
I feel the experience of many people writing with ASN.1 is that of dealing with PKI or telecom protocols, which attempt to build worldwide interop between actually very different systems. The spec is one thing, but implementing it by the book is not sufficient to get something actually interoperable, there are a ton of quirks to work around.
If it were being used in homogeneous environments the way protocol buffers typically are, where the schemas are usually more reasonable and both the read and write side are owned by the same entity, it might not have gotten such a bad rap...
zzo38computer
3 months ago
I also like ASN.1; I think it is better than JSON, XML, Protocol Buffers, etc, in many ways. I use it in some of my programs.
(However, like many other formats (including JSON, XML, etc), ASN.1 can be badly used.)
tambre
3 months ago
How do you feel about something like CBOR? In which stage would you say it's stuck in evolution compared to ASN.1 (since you said Protobuf is still TLV)?
cryptonector
3 months ago
CBOR and JSON are just encodings, not schema, though there are schema languages for them. I've not looked at those schema languages, but I doubt they support typed-hole formalisms (though they could be added, as it's just schema). And since CBOR and JSON are just encodings, they are stuck being what they are: new encodings will have compatibility problems. For example, CBOR is mostly just JSON with a few new types, but then things like jq have to evolve too, or else those new types are not really usable. Whereas ASN.1 has much more freedom to introduce new types and new encoding rules, because ASN.1 is schema, and introducing a new type doesn't mean existing code has to accept it, since you evolve _protocols_. But to be fair, JSON is incredibly useful sans schema, while ASN.1 is not really useful at all if you want to avoid defining modules (schemas).
tambre
3 months ago
I was considering CBOR+CDDL heavily for a project a while back, so they're a tad intertwined in my head. I very much liked CBOR's capability of defining wholly new types and describing them neatly in CDDL. You could even add some basic value constraints (less than, greater equal, etc.). That seemed really powerful, and lacking ASN.1 experience, it sounds like a very lite, JSON-like subset of that.
coderjames
3 months ago
I worked with ASN.1 for a few years in the embedded space, because it's used for communications between aircraft and air traffic control in Europe [1]. I enjoyed it. BER encoding is pretty much the tightest way to represent messages on the wire, and when you're charged per bit for messaging, it all adds up. When a messaging syntax is defined in ASN.1 in an international standard (ICAO 9880, anyone?), it's going to be around for a while. Haven't been able to get my current company to adopt ASN.1 to replace our existing homegrown serialization format.
[1] https://en.wikipedia.org/wiki/Aeronautical_Telecommunication...
p_l
3 months ago
Isn't PER or OER more compact? especially for the per-bit charging thing
coderjames
3 months ago
Oh yeah, derp. I was thinking unaligned-PER, not BER.
lepicz
3 months ago
Of all the encodings I like BER the most as well.
(I worked in telecommunications when ASN.1 was a common thing.)
hamburglar
3 months ago
As a former PKI enthusiast (tongue firmly in cheek with that description) I can say if you can limit your exposure to simply issuing certs so you control the data and thus avoid all edge cases, quirks, non-canonical encodings, etc, dealing with ASN.1 is “not too terrible.” But it is bad. The thing that used to regularly amaze me was the insane depths of complexity the designers went to … back in the 70’s! It is astounding to me that they managed to make a system that encapsulated so much complexity and is still in everyday use today.
You are truly a masochist and I salute you.
cryptonector
3 months ago
ASN.1 is from the mid-80s, and PKI is from the late 80s.
The problems with PKI/PKIX all go back to terrible, awful, no good, very bad ideas about naming that people in the OSI/European world had in the 80s -- the whole x.400/x.500 naming style where they expected people to use something like street addresses as digital names. DNS already existed, but it seems almost like those folks didn't get the memo, or didn't like it.
noAnswer
3 months ago
They got grant money to work on anything but TCP/IP. :-) A lot of European oral history about how "the Internet" got to a Uni talks about how they were supposed to only use ISO/OSI but eventually unofficially installed IP anyway.
cryptonector
3 months ago
But of course.
p_l
3 months ago
There's the other story of corporate vendors saying "yes, we will implement OSI, give us X time, but buy our product now and we will deliver OSI", then actually going "we mangled BSD sockets enough to work if you squint; let's try to wait the client out while raking in the profit".
yujzgzc
3 months ago
Organizational unit, location, etc all these concepts were pretty dumb to tie with digital identity in retrospect.
p_l
3 months ago
Unless you need things like ability to address groups in flexible ways, which is why X.400 survives in various places (in addition to actually supporting inline cryptography and binary attachments).
What people forget is that you do not have to use the whole set of schema attributes.
cryptonector
3 months ago
Does Internet email not support binary attachments? Of course it does.
And encrypted and/or signed email? That too, though very poorly, but the issue there is key management, and DAP/LDAP don't help because in the age of spam public directories are not a thing. Right now the best option for cryptographic security for email is hop-by-hop encryption using DANE for authentication in SMTP, with headers for requesting this and headers for indicating whether received email transited with cryptographic protection all the way from sender to recipient.
As for the "ability to address groups in flexible ways", I'm not sure what that means, but I've never seen group distribution lists not be sufficient.
p_l
3 months ago
And how long did it take for binary attachments to be reliable, encodings unfucked, etc?
As for group addressing, distribution lists are pitiful in comparison especially on discovery side.
Anyway, ultimately the big issue is that the DAP schema is always presented as "oh, you need all the details", when... you don't. And we never got to the point of really implementing things well outside the more expected use case, where people do not, actually, use the addresses directly but pick by name/function from a directory.
cryptonector
3 months ago
> And how long did it take for binary attachments to be reliable, encodings unfucked, etc?
Oh, I can't remember. Binary attachments have worked since I started using them long, long ago. They worked at least by the mid-90s. Back then I was using both Internet email and X.400 (HP OpenMail!), and X.400 was a massive pain (for me especially, since I was one of the people who maintained a gateway between the two). I know what you're referring to: it took a long time for email to get "8-bit clean" / MIME because of the way SMTP works, but MIME was very much a thing by the mid-90s.
So it took a while if you count the days of UUCP email -- round it to two decades. But "by the mid-90s" was plenty good enough, because that's when the Internet revolution hit big companies. Lack of binary attachments wasn't something that held back Internet adoption. As far as the public and corps are concerned, the Internet only became a thing circa 1994 anyways.
> As for group addressing, distribution lists are pitiful in comparison especially on discovery side.
Discovery, meaning directories. Those are nice inside corporate networks, which is where you need this functionality, so I agree, and yes people use Exchange / Exchange 365 / Outlook for this sort of thing, though even mutt can do LDAP-based discovery (poorly, but yes). Outside corporate networks directories are only useful within academia and governments / government labs. Outside all of that no one wants directories because they would only encourage the spammers.
p_l
3 months ago
Binary attachments mostly started to work with fewer surprises by the second half of the 1990s, but 8-bit-unclean issues persisted in my experience... I wanted to say 2001, but I recall getting hit by them until 2010 at least.
And in some ways the desire of people to send non-7-bit-ASCII text as email is also a continued brokenness in SMTP email.
As for directories - my point was more that directories hid the addressing details from surface UI. Otherwise AFAIK X.400 works perfectly fine without using everything in the possible schema.
Fun fact - Exchange is actually an X.400 system, despite no longer having non-SMTP connection options. But its internals are even wonkier, like Exchange and Outlook not supporting HTML email natively (no, really, MAPI.DLL crashes on HTML email; if you send/receive HTML email it's stored as HTML wrapped in RTF, and unwrapped when sent elsewhere).
cyberax
3 months ago
It's also amazing that we're basically using only a couple of free-form text fields in the WebPKI for the most crucial parts of validation.
Completely ignoring ASN.1's support for complicated structures, with more than one CVE linked to incorrect parsing of these text fields.
cryptonector
3 months ago
No we're not. We're using dNSName subjectAlternativeName values. We used to use the CN attribute of the subject DN, and... there is still code for that, but it's obsolete.
We _are_ using subject DNs for linking certs to their issuers, and though those are "free-form", we don't parse them, we only check for equality.
cyberax
3 months ago
CN is absolutely used everywhere. And it can contain wildcards. SANs are also free-form.
cryptonector
3 months ago
SANs are not free-form. A dNSName SAN is supposed to have an FQDN. An rfc822Name SAN is supposed to carry an email address. And, ok, sure, email addresses' mailbox part is basically free-form, but so what, you don't interpret that part unless you've accepted that certificate for that email address' domain part, and then you interpret the mailbox part the way a mail server would because you're probably the mail server. Yes, you can have directoryName SANs, but the whole point of SANs is that DNs suck because x.400/x.500 naming sucks so we want to use something that isn't that.
cyberax
3 months ago
> to have an FQDN
With wildcards.
cryptonector
3 months ago
Ah yes, you're right. That is a horrible bodge.
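For illustration, here's a minimal sketch (in Python, with hypothetical function names) of the leftmost-label-only wildcard matching that dNSName comparisons roughly follow per RFC 6125 -- a simplification, since real validators also have rules about public suffixes and partial-label wildcards:

```python
def wildcard_match(pattern, hostname):
    """Match a dNSName pattern against a hostname, allowing '*' only as
    the entire leftmost label (roughly the RFC 6125 rules). The wildcard
    matches exactly one label, so '*.example.com' matches
    'www.example.com' but not 'example.com' or 'a.b.example.com'."""
    p_labels = pattern.lower().split(".")
    h_labels = hostname.lower().rstrip(".").split(".")
    # The wildcard covers exactly one label, so label counts must agree.
    if len(p_labels) != len(h_labels):
        return False
    if p_labels[0] == "*":
        p_labels, h_labels = p_labels[1:], h_labels[1:]
    return p_labels == h_labels
```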
StopDisinfo910
3 months ago
There was an amusing chain of comments the last time protobuf was mentioned, in which some people were arguing that it had been a terrible idea and that ASN.1, as a standard, should have been used instead.
It was hilarious because clearly none of the people who were in favor had ever used ASN.1.
mananaysiempre
3 months ago
Cryptonector[1] maintains an ASN.1 implementation[2] and usually has good things to say about the language and its specs. (Kind of surprised he's not in the comments here already :) )
cryptonector
3 months ago
Thanks for the shout-out! Yes, I do have nice things to say about ASN.1. It's all the others that mostly suck, with a few exceptions like XDR and DCE/Microsoft RPC's IDL.
mananaysiempre
3 months ago
Derail accepted! Is your approval of DCE based only on the serialization not being TLV or on something else too? I have to say, while I do think its IDL is tasteful, its only real distinguishing feature is AFAICT the array-passing/returning stuff, and that feels much too specialized to make sense of in anything but C (or largely-isomorphic low-level languages like vernacular varieties of Pascal).
cryptonector
3 months ago
Well, I do disapprove of the RPC-centric nature of both, XDR and DCE RPC, and I disapprove of the emphasis on "pointers" and -in the case of DCE- support for circular data structures and such. The 1980s penchant for "look ma'! I can have local things that are remote and you can't tell because I'm pretending that latency isn't part of the API hahahaha" research really shows in these. But yeah, at least they ain't TLV encodings, and the syntax is alright.
I especially like XDR, though maybe that's because I worked at Sun Microsystems :)
"Pointers" in XDR are really just `OPTIONAL` in ASN.1. Seems so silly to call them pointers. The reason they called them "pointers" is that that's how they represented optionality in the generated structures and code: if the field was present on the wire then the pointer is not null, and if it was absent the then pointer is null. And that's exactly what one does in ASN.1 tooling, though maybe with a host language Optional<> type rather than with pointers and null values. Whereas in hand-coded ASN.1 codecs one does sometimes see special values used as if the member had been `DEFAULT` rather than `OPTIONAL`.
cryptonector
3 months ago
You're likely to find my comments among those saying that. I've been using ASN.1 in some way for a couple of decades, and I've been an ASN.1 implementor for about half a decade.
whizzter
3 months ago
It's not entirely horrible: parsing DER dynamically enough to interpret most common certificates can be done in some 200-300 lines of C#, so I'd take that any day over XML.
The main problem is that to work with the data you need to understand the semantics of the magic object identifiers and while things like the PKIX module can be found easily, the definitions for other more obscure namespaces for extensions can be harder to locate as it's scattered in documentation from various standardization organizations.
So, protobuf could very well have been transported in DER; the problem was probably more one of Google not seeing any value in interoperability and wanting to keep it simple (or worse, oblivious users clashing by re-using the wrong, less well documented namespaces).
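As a rough illustration of how small a dynamic DER walker can be, here's a Python sketch (it handles definite long-form lengths per X.690, but not multi-byte high tag numbers, and not indefinite lengths, which DER forbids anyway):

```python
def parse_tlv(data, offset=0):
    """Parse one DER TLV at `offset`; return (tag, value_bytes, next_offset)."""
    tag = data[offset]
    length = data[offset + 1]
    offset += 2
    if length & 0x80:  # long form: low 7 bits count the length octets
        n = length & 0x7F
        length = int.from_bytes(data[offset:offset + n], "big")
        offset += n
    return tag, data[offset:offset + length], offset + length

def walk(data, depth=0):
    """Recursively dump a DER structure; bit 0x20 in the tag marks
    constructed types, whose value is itself a series of TLVs."""
    offset = 0
    while offset < len(data):
        tag, value, offset = parse_tlv(data, offset)
        print("  " * depth + f"tag 0x{tag:02X}, {len(value)} bytes")
        if tag & 0x20:
            walk(value, depth + 1)
```

Making sense of what the walker prints is the hard part the parent describes: mapping the OIDs and extension blobs to meaning.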
thayne
3 months ago
ASN.1 seems like something that could have been good... if it were less complicated, had more accessible documentation, and had better tooling.
HelloNurse
3 months ago
I suspect that typical interactions with ASN.1 are benign because people are interested in reading and writing a few specific preexisting data structures with whatever encoding is required for interoperability, not in designing new message structures and choosing encodings for them.
For example, when I inherited a public key signature system (mainly retrieving certificates and feeding them to cryptographic primitives and downloading and checking certificate revocation lists) everything troublesome was left by dismissed consultants; there were libraries for dealing with ASN.1 and I only had to educate myself about the different messages and their structure, like with any other standard protocol.
i2pi
3 months ago
(void) space. :P
olvy0
3 months ago
Just wanted to say I enjoyed your post very much. Thank you for writing it. I love D but unfortunately I haven't touched it for several years. I also have some experience writing parsers and implementing protocols.
BradleyChatha
3 months ago
Thank you :)
throw_a_grenade
3 months ago
Don't worry, it's your blog, and your way. Keep it up, if it makes you whole.