Animats
8 days ago
As someone who used Franz LISP on Sun workstations while someone else nearby used a Symbolics 3600 refrigerator-sized machine, I was never all that impressed with the LISP machine. The performance wasn't all that great. Initially garbage collection took 45 minutes, as it tried to garbage-collect paged-out code. Eventually that was fixed.
The hardware was not very good. Too much wire wrap and slow, arrogant maintenance.
I once had a discussion with the developers of Franz LISP. The way it worked was that it compiled LISP source files and produced .obj files. But instead of linking them into an executable, you had to load them into a run-time environment. So I asked, "Could you put the run-time environment in another .obj file, so you just link the entire program and get a standalone executable?" "Why would you want to do that?" "So we could ship a product." This was an alien concept to them.
So was managing LISP files with source control, like everything else. LISP gurus were supposed to hack.
And, in the end, 1980s "AI" technology didn't do enough to justify that hardware.
e40
7 days ago
I worked on Franz Lisp at UCB. A couple of points:
The ".obj" file was a binary file that contained machine instructions and data. It was "fast loaded", the file format was called "fasl", and it worked well.
Building an application wasn't an issue, because we had "dumplisp", which took the image in memory and wrote it to disk. The resulting image could be executed to create a new instance of the program as it was at the time dumplisp was run. Emacs called this "unexec" and it did approximately the same thing.
Maybe your discussions with my group predated me and predated some of the above features, I don't know. I was in Fateman's group from '81-'84.
I assume your source control comments were about the Lisp Machine and not Franz Lisp. RCS and SCCS were a thing in the early 80's, but they didn't really gain steam until after I arrived at UCB. I was the one (I think... it was a long time ago) that put Franz Lisp under RCS control.
Animats
7 days ago
I was doing this in 1980-1983. Here's some code.[1] It's been partly converted to Common LISP, but I was unable to get some of the macros to work.
This is the original Oppen-Nelson simplifier, the first SAT solver. It was modified by them under contract for the Pascal-F Verifier, a very early program verifier.
We kept all the code under SCCS and built with make, because the LISP part was only part of the whole system.
specialgoodness
7 days ago
The Nelson-Oppen simplifier is a great piece of work, but it is not the first SAT solver. Boyer and Moore published their formally verified SAT solver in their 1979 A Computational Logic, the first book on the Boyer-Moore Theorem Prover, though it was first implemented I believe in 1973. This algorithm, based on IF-normalization and lifting, was also a core part of the original Boyer-Moore prover. One interesting note is that it actually was almost an earlier discovery of BDDs - they have the core BDD data structure and normalization algorithm but were just missing memoization and the fact that orderings on variables induce canonicity for checking boolean equivalence! But in any case, Boyer-Moore had a (formally verified, even!) implemented and used SAT solver long before Nelson and Oppen.
e40
7 days ago
Do you remember who you discussed it with? It had to be either Sklower or Foderaro, unless you talked with Fateman.
Were the macros originally from another dialect of Lisp?
Animats
7 days ago
Franz LISP had, I think, MacLISP macros, while Common LISP has a different system.
I talked to Fateman at some point. Too long ago to remember about what.
dtagames
a day ago
You've hit the nail on the head. It's the inability to ship product from a Lisp machine that killed the idea. The fungibility of Lisp all the way down turned each machine into a one-off lab of its own. And this tech arrived after the industry had started moving from bespoke solutions provided by hardware companies to packaged software from specialized firms. Since Lisp machines aren't good at maintaining a duplicatable environment, they're not terribly useful for commercial software production nor as a distribution target.
Source: I was a mainframe compiler developer at IBM during this era.
varjag
8 days ago
Lisp Machines had versioning file systems IIRC. Kinda like on VMS. Was SCCS really that far ahead?
kragen
7 days ago
Yes, because on VMS (and presumably Genera) 20 versions of a file took 20× as much disk space as one version, so you wouldn't keep unlimited versions. In SCCS the lines that didn't change are only stored once, so 20 versions might be 2× or 1.1× or 1.01× the original file size.
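The storage arithmetic above can be sketched in a few lines. This is a toy model, not SCCS's actual interleaved "weave" format: it just contrasts keeping every revision whole (as a versioning filesystem does) with keeping the first revision plus only the lines each later revision adds or changes.

```python
def full_copy_lines(versions):
    # Versioning FS model: every revision is stored whole.
    return sum(len(v) for v in versions)

def delta_lines(versions):
    # Delta model: store the first revision, then only lines
    # that did not already appear in the previous revision.
    stored = len(versions[0])
    for prev, cur in zip(versions, versions[1:]):
        prev_set = set(prev)
        stored += sum(1 for line in cur if line not in prev_set)
    return stored

# 20 revisions of a 100-line file, each revision editing one line:
versions = [["line %d" % i for i in range(100)]]
for rev in range(1, 20):
    nxt = versions[-1][:]
    nxt[rev] = "line %d (edited in rev %d)" % (rev, rev)
    versions.append(nxt)

print(full_copy_lines(versions))  # 2000 lines stored: 20x the original
print(delta_lines(versions))      # 119 lines stored: about 1.2x the original
```

With one-line edits, the delta store grows by one line per revision, which is where the "1.1× or 1.01×" figures come from.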
johnisgood
7 days ago
You are correct, see: https://en.wikipedia.org/wiki/Versioning_file_system#LMFS.
Also: https://hanshuebner.github.io/lmman/pathnm.xml
It is worth mentioning that while it is not versioning per se, APFS and ZFS support instantaneous snapshots and clones as well.
Btrfs supports snapshots, too.
HAMMER2 in DragonFlyBSD has the ability to store revisions in the filesystem.
rst
7 days ago
Ummmm... yes. The problem with versioning file systems is that they only kept the last few versions; for files under active development, it was usually difficult to recover state older than a week or two.
(SCCS handled collaborative development and merges a lot worse than anything current, but... versioning file systems were worse there, too; one war story I heard involved an overenthusiastic developer "revising" someone else's file with enough new versions that by the time the original author came back to it, their last version of the code was unrecoverable.)
rjsw
7 days ago
Franz Lisp could create standalone executables from very early in the project, the compiler is one.
e40
7 days ago
Correct. To continue the puns, it was called Liszt.
MangoToupe
7 days ago
> The hardware was not very good.
The hardware was never very interesting to me. It was the "lisp all the way down" that I found interesting, and the tight integration with editing-as-you-use. There's nothing preventing that from working on modern risc hardware (or intel, though please shoot me if I'm ever forced back onto it).
raverbashing
8 days ago
> "So we could ship a product." This was an alien concept to them.
This mentality seems to have carried over to (most) modern FP stacks
whstl
8 days ago
Nah, it carried over to scripting languages.
Most of them still require a very specific, very special, very fragile environment to run, and require multiple tools and carefully run steps just to do the same thing you can do with a compiled executable linked against the OS.
They weren't made for having libraries, or being packaged to run in multiple machines, or being distributed to customers to run in their own computers. Perhaps JS was the exception but only to the last part.
Sure, it mostly works today, but a lot of people put in a lot of effort so we can keep shoving square pegs into round holes.
graemep
7 days ago
TCL has good solutions for this, but that hasn't made it a success.
Where I see Python used is in places where you do not need it packaged as executables:
1. Linux - where the package manager solves the problem. I use multiple GUI apps written in python
2. On servers - e.g. Django web apps, where the environment is set up per application
3. Code written for specific environments - even for specific hardware
4. One off installs - again, you have a specified target environment.
In none of the above cases do I find the environment to be fragile. On the other hand, if you are trying to distribute a Windows app to a large number of users I would expect it to be problematic.
whstl
7 days ago
You don't find the environment to be fragile because millions of human hours have been spent fixing those problems or working around them.
Which is significantly more than was needed for different technologies to achieve similar results.
mr_toad
7 days ago
But people start by hacking away with one-off installs written for their specific environments, get it to the point where it’s useful to others, and then expect others to install all the tools and dependencies needed to install it.
Quick start guide: works on my machine.
throwaway81523
7 days ago
It seems the other way to me, maintaining environment consistency is such a pain that even a 5 line Python script ends up getting packaged in its own Docker container.
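The pattern described above, wrapping even a tiny script in its own container just to pin its environment, looks roughly like this (a generic sketch; the image tag and filenames are illustrative, not from any poster's setup):

```dockerfile
# Freeze the interpreter version and dependencies alongside the script,
# so "works on my machine" becomes "works in this image".
FROM python:3.12-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY script.py .
CMD ["python", "script.py"]
```

The container is doing the job a linker and a static executable would do elsewhere: bundling the runtime with the program.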
jacquesm
8 days ago
Don't get me started. I tried to use a very simple python program the other day, to talk to a bluetooth module in a device I'm building. In the end I gave up and wrote the whole thing in another language, but that wasn't before fighting the python package system for a couple of hours thinking the solution is right around the corner, if only I can get rid of one more little conflict. Python is funny that way, it infantilized programming but then required you to become an expert at resolving package manager conflicts.
For a while Conda seemed to have cracked this, but there too I now get unresolvable conflicts. It is really boggling the mind how you could get this so incredibly wrong and still have the kind of adoption that python has.
foobarian
7 days ago
You and me both. These days I don't even try, just docker pull python and docker run -v .:/app python /app/foo.py
eternityforest
7 days ago
I don't generally see this kind of issue with UV, at least not with the ultra popular libraries.
With the exception of Gstreamer. I use some awful hacks to break out of virtual environments and use the system Gstreamer, because it's not on PyPi....
arethuza
7 days ago
I thought that was just me - I really rather liked Python the language but was completely confused at how the package system seemed to work.... Mind you this was 12 years ago or so but it was enough to put me off using it ever again.
iLemming
7 days ago
Yeah, it's still shitty. So often I have to go through some weird hoops just to run the tests for a project with commits made last week. I can't even ask Claude to explain something about a given repo; it naively tries to run the tests, only to hit a wall. The number of different linters and checkers we have to run on CI just to make sure things are in a good state, yet every time I clone something and try to get it running, it's almost always some kind of bullcrap. Why the fuck do we even keep trying to write things in Python, I just don't get it.
ErroneousBosh
8 days ago
[flagged]
iLemming
7 days ago
> you're not very good at computers.
Yup, I guess I am not. Been coding for over 20 years, went through over a dozen different PLs and only Python - the best fucking friend who doesn't sugar coat it, tells you without stuttering - "you suck at this, buddy"
# PEP 9001: Constructive Computing Feedback
## Abstract
This PEP proposes a standardized error message for situations where Python interpreters shall inform the user of insufficient computational competence.
## Specification
When a user attempts to execute Python code that supposedly should work but doesn't, the interpreter shall emit:
You're suck at computers
ErroneousBosh
7 days ago
[flagged]
iLemming
7 days ago
It's not hard. It's just annoying to deal with this shit on a constant basis. Like just the other day, the tests wouldn't pass locally while they were passing on CI. I was scratching my head for some time; turns out there was a breaking change in csv.QUOTE_STRINGS or something between Python 3.12 and 3.13. How the fuck did they manage to fix/improve fucking csv logic by introducing a breaking change?
whstl
6 days ago
I'm always suspicious of people who go "this is easy" as a way to put others down.
...especially when it's about problems that are universally accepted as not being trivial, and that often require entire teams and ecosystems (Docker, Poetry, uv) to solve at scale.
jacquesm
8 days ago
This is such a hilarious comment.
Thank you for making my day.
DonHopkins
8 days ago
Hey Gen Z, as long as I have you on the line, could you please explain 67 to me?
I've heard of "68 and I'll owe you one", so is 67 about owing you two?
jacquesm
8 days ago
I'm having a hard time coping with my social media addiction while doing some fairly hardcore development on an STM32 based platform so sorry :)
Incidentally, when will you (multiple) come and visit?
It's been too long.
DonHopkins
8 days ago
I owe you at least one or two! Maybe we can test your drones out on that Russian guy with the GoFundMe campaign, then I'll owe you three! ;)
ux266478
7 days ago
thats a gen alpha thing sorry unc
s0sa
8 days ago
Oh yeah? Well the jerk store called, and they’re running out of you!
whstl
6 days ago
His wife is in a coma
raverbashing
8 days ago
You are correct unfortunately
logicprog
8 days ago
Yeah, anytime I see a useful tool, and then find out it's written in Python, I want to kms — ofc, unless it happens to work with UV, but they don't always
rmunn
8 days ago
Not the ones I've used. Haskell compiles to executables, F# compiles to the same bytecode that C# does and can be shipped the same way (including compiling to executables if you need to deploy to environments where you don't expect the .NET runtime to be already set up), Clojure compiles to .jar files and deploys just like other Java code, and so on.
I'll grant that there are plenty of languages that seemed designed for research and playing around with cool concepts rather than for shipping code, but the FP languages that I see getting the most buzz are all ones that can ship working code to users, so the end users can just run a standard .exe without needing to know how to set up a runtime.
raverbashing
8 days ago
True, but some still want me to understand what a monofunctor is, or something else that sounds like a disease, to do things like print to the screen or get a random number.
I feel that is the biggest barrier to their adoption nowadays (and also silly things like requiring ;; at the end of the line)
Pure functions are a good theoretical exercise but they can't exist in practice.
jacquesm
8 days ago
> Pure functions are a good theoretical exercise but they can't exist in practice.
Well, they can, just not all the way up to the top level of your program. But the longer you can hold off your functions having side effects, the more predictable and stable your codebase will be, with fewer bugs and less chance of runtime issues as an added benefit.
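That structure, a pure core with side effects pushed to a thin top level, is often called "functional core, imperative shell". A minimal sketch (the function names are illustrative, not from any poster's code):

```python
def summarize(lines):
    # Pure core: same input always gives the same output, no I/O.
    # Trivially testable without touching the filesystem.
    words = sum(len(line.split()) for line in lines)
    return f"{len(lines)} lines, {words} words"

def main(path):
    # Imperative shell: all side effects (file I/O, printing) live here,
    # at the top level, where the comment above says they must.
    with open(path) as f:
        print(summarize(f.readlines()))

print(summarize(["a b", "c"]))  # prints "2 lines, 3 words"
```

Everything below the shell can be reasoned about and tested as plain data-in, data-out.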
DonHopkins
8 days ago
Yes, but they're "Hello world!" hostile, so traditional programming language pedagogy doesn't work well.
Q: How many Prolog programmers does it take to change a lightbulb?
A: Yes.
mchaver
8 days ago
I imagine LLMs have already thrown traditional programming language pedagogy out the window.
raverbashing
7 days ago
Yes I agree, pure functions are good building blocks (for the most part), but I don't think the current abstractions and ways of bridging the FP and Procedural world are good enough
Also have you managed to eliminate the side effect of your IP register changing when your program is running? ;)
mr_toad
7 days ago
> but I don't think the current abstractions and ways of bridging the FP and Procedural world are good enough
I find that both Python and Javascript allow you to use functional code when appropriate, without forcing you to use it when it isn’t.
dreamcompiler
7 days ago
I love FP but at the end of the day registers are global variables. Half of modern compiler theory consists of workarounds for this sad truth.
lucas_membrane
7 days ago
A functional program is a self-contained expression -- an isolated system following its own rules. The foremost example we have of such a thing is the universe itself, but the universe is not a good example in this discussion, because we have plenty of reasons to think that the universe contains pure (not pseudo-) randomness. Beyond that, isolation, when it matters, is not an easily proven proposition, and is a deplorable fantasy when assumed in many of the other science and engineering disciplines.
roryc89
8 days ago
In most FP languages it is simple to print to screen and get a random number.
Pure functions often exist in practice and are useful for preventing many bugs. Sure, they may not be suitable for some situations but they can prevent a lot of foot guns.
Here's a Haskell example with all of the above:
import System.Random (randomRIO)

main :: IO ()
main = do
  num <- randomRIO (1, 100)
  print $ pureFunction num

pureFunction :: Int -> Int
pureFunction x = x * x + 2 * x + 1
iLemming
7 days ago
There's 'FP stacks' and "FP stacks", and some aren't remotely similar. Volumes of money/data get handled by FP stacks - Jane Street famously uses OCaml; Cisco runs their entire cybersec backend on Clojure; Nubank covers entire Latin America and is about to spread into the US - it runs on Clojure and Elixir; Apple has their payment system, Walmart their billing, Netflix their analytics on Clojure; Funding Circle in Europe and Splash in the US; etc. etc. There are tons of actual working products built on FP stacks. Just because your object-oriented brain can't pattern-match the reality, it doesn't mean it's not happening.
dbtc
8 days ago
Wouldn't the whole system be the product then? There's tradeoffs, but that's just integration.
vrighter
7 days ago
python comes to mind here. I have almost never had a deployment go smoothly.