AdieuToLogic
5 months ago
Perhaps the most telling portion of their decision is:
Quality concerns. Popular LLMs are really great at
generating plausibly looking, but meaningless content. They
are capable of providing good assistance if you are careful
enough, but we can't really rely on that. At this point,
they pose both the risk of lowering the quality of Gentoo
projects, and of requiring an unfair human effort from
developers and users to review contributions and detect the
mistakes resulting from the use of AI.
The first non-title sentence is the most notable to consider, with the rest providing reasoning difficult to refute.
jjmarr
5 months ago
I've been using AI to contribute to LLVM, which has a liberal policy.
The code is of terrible quality and I am at 100+ comments on my latest PR.
That being said, my latest PR is my second-ever to LLVM and is an entire linter check. I am learning far more about compilers at a much faster pace than if I took the "normal route" of tiny bugfixes.
I also try to do review passes on my own code before asking for code review to show I care about quality.
LLMs increase review burden a ton but I would say it can be a fair tradeoff, because I'm learning quicker and can contribute at a level I otherwise couldn't. I feel like I will become a net-positive to the project much earlier than I otherwise would have.
edit: the PR in question. Unfortunately I've been on vacation and haven't touched it recently.
https://github.com/llvm/llvm-project/pull/146970
It's a community's decision whether to accept this tradeoff & I won't submit AI generated code if your project refuses it. I also believe that we can mitigate this tradeoff with strong social norms that a developer is responsible for understanding and explaining their AI-generated code.
bestham
5 months ago
IMO that is not your call to make; it is the reviewer's call to make. It is the reviewers' resources you are spending to learn more quickly. You are consuming a "free" resource for personal gain because you feel that it is justified in your particular case. It would likely not scale, and would grind many projects to a halt, at least temporarily, if this were done at scale.
ororroro
5 months ago
The decision is made by llvm https://llvm.org/docs/FAQ.html#id4
BrenBarn
5 months ago
I would interpret this as similar to being able to take paper napkins or straws at a restaurant. You may be welcome to take napkins, but if you go around taking all the napkins from every dispenser you'll likely be kicked out and possibly they'll start keeping the napkins behind the counter in the future. Similarly if people start treating "you can contribute AI code to LLVM" as "feel free to submit nonsense you don't understand", I would not be surprised to see LLVM change its stance on the matter.
AdieuToLogic
5 months ago
> I've been using AI to contribute to LLVM, which has a liberal policy.
This is a different decision made by the LLVM project than the one made by Gentoo, which is neither right nor wrong IMHO.
> The code is of terrible quality and I am at 100+ comments on my latest PR.
This may be part of the justification for the published Gentoo policy. I am not a Gentoo maintainer, so I cannot say for certain. I can say it is implied within their policy:
At this point, they pose both the risk of lowering the
quality of Gentoo projects, and of requiring an unfair
human effort from developers and users to review
contributions ...
> LLMs increase review burden a ton ...
Hence the Gentoo policy.
> ... but I would say it can be a fair tradeoff, because I'm learning quicker and can contribute at a level I otherwise couldn't.
I get it. I really do.
I would also ask - of the requested changes reviewers have made, what percentage are due to LLM generated changes? If more than zero, does this corroborate the Gentoo policy position of:
Popular LLMs are really great at generating plausibly
looking, but meaningless content.
What if "erroneous" or "invalid" were the adjective used instead of "meaningless"?
jjmarr
5 months ago
> I would also ask - of the requested changes reviewers have made, what percentage are due to LLM generated changes? If more than zero, does this corroborate the Gentoo policy position of "Popular LLMs are really great at generating plausibly looking, but meaningless content."
I can only speak for my own PR, but most requested changes were related to formatting and other stylistic issues that I didn't fully grasp as a new LLVM contributor, e.g. not wrapping at 80 characters, forgetting to declare stuff as const, or formatting the documentation incorrectly. Previous codebases I've worked on during internships linted the first two in CI. And the documentation being formatted incorrectly is because I hand-wrote it without AI.
Out of the AI-related issues that I didn't catch, the biggest flaws were redundant comments and the use of string manipulation/parsing instead of AST manipulation. Useless comments are very common and I've gotten better at pruning them. The AI's insistence on hand-rolling stuff with strings was surprising and apparently LLVM-specific.
However, there was plenty of erroneous and invalid behaviour in the original AI-generated code, such as flagging `uint32_t` because the underlying type was an `unsigned int` (which wouldn't make sense as we want to replace `unsigned int` with `uint32_t`).
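For concreteness, here is a minimal sketch of that false positive (my own hypothetical illustration, not code from the PR), assuming a check that inspects the canonical type instead of the type as written:

    // Hypothetical illustration - not taken from the actual check or its tests.
    #include <cstdint>

    unsigned int raw_count;     // should be flagged: prefer a fixed-width type
    std::uint32_t fixed_count;  // must NOT be flagged, even though on most
                                // platforms std::uint32_t is a typedef for
                                // unsigned int; a check that desugars to the
                                // canonical type sees both declarations as the
                                // same and flags both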
I prevented most of this from reaching the PR by writing good unit tests and having a clear vision of what the final result should look like. I believe this should be a basic requirement for trying to contribute AI-generated code to an open-source project but other people might not share the same belief.
AdieuToLogic
5 months ago
Thank you for sharing your experiences in using this approach. They are ones which cannot be ascertained from PRs alone.
> However, there was plenty of erroneous and invalid behaviour in the original AI-generated code ...
> I prevented most of this from reaching the PR by writing good unit tests and having a clear vision of what the final result should look like.
This identifies an interesting question in my mind:
If an LLM code generator is used, is it better to use
it to generate production code and hand-write tests to
verify, or to hand-write production code and use
LLM-generated tests to verify?
Assuming LLM code generation, my initial answer is the approach you took, as the test suite would serve as an augmentation to whatever prompt(s) were used. But I could also see a strong case for using LLM-generated test suites in order to maximize functional coverage. Maybe this question would be a good candidate for an "Ask HN".
> I believe this should be a basic requirement for trying to contribute AI-generated code to an open-source project but other people might not share the same belief.
FWIW, I completely concur.
jjmarr
5 months ago
In my experience, LLMs are strongest when paired with an automated BS filter such as unit tests or linters. I use Cline and after every generation it reads VS Code's warnings & fixes them.
> If an LLM code generator is used, is it better to use it to generate production code and hand-write tests to verify, or to hand-write production code and use LLM-generated tests to verify?
I do both.
1. Vibe code the initial design with input on the API/architecture.
2. Use the AI to write tests.
3. Carefully scrutinize the test cases, which are much easier to review than the code.
4. Save both.
5. Go do something else and let the AI modify the code until the tests/linting/etc passes.
6. Review the final product, make edits, and create the PR.
The output of step 1 is guaranteed to be terrible/buggy and difficult to review for correctness, which is why I review the test cases instead because they provide concrete examples.
Step 5 eliminates most of the problems and frees me to review important stuff.
The whole reason I wrote the check is because AI keeps using `int` and I don't want it to.
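To make the habit concrete, here is a hypothetical example of the pattern the check is meant to catch (my own sketch, not a test from the PR), assuming guidelines that prefer `size_t` for sizes and indices:

    // Hypothetical example - not taken from the PR or its test suite.
    #include <cstddef>
    #include <vector>

    void process(const std::vector<double>& xs) {
      // LLM-favoured form: plain int typically draws a -Wsign-compare warning
      // against xs.size() and cannot index a vector with more than INT_MAX
      // elements.
      for (int i = 0; i < xs.size(); ++i) { /* ... */ }

      // Preferred form under the assumed guideline: std::size_t matches the
      // container's size_type.
      for (std::size_t i = 0; i < xs.size(); ++i) { /* ... */ }
    }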
benreesman
5 months ago
I'm a bit later in my career and I've been involved with modern machine learning for a long time which probably affects my views on this, but I can definitely relate to aspects of it.
I think there are a couple of good signals in what you've said but also some stuff (at least by implication/phrasing) that I would be mindful of.
The reason why I think your head is fundamentally in a good place is that you seem to be shooting for an outcome where already high effort stays high, and with the assistance of the tools your ambition can increase. That's very much my aspiration with it, and I think that's been the play for motivated hackers forever: become as capable as possible as quickly as possible by using every effort and resource. Certainly in my lifetime I've seen things like widely distributed source code in the 90s, Google a little later, StackOverflow indexed by Google, the mega-grep when I did the FAANG thing, and now the language models. They're all related (and I think less impressive/concerning to people who remember pre-SEO Google, that was up there with any LLM on "magic box with reasonable code").
But we all have to self-police on this because with any source of code we don't understand, the abstraction almost always leaks, and it's a slippery slope: you get a little tired or busy or lazy, it slips a bit, next thing you know the diff or project or system is jeopardized, and you're throwing long shots that compound.
I'm sure the reviewers can make their own call about whether you're in an ok place in terms of whether you're making a sincere effort or if you've slipped into the low-integrity zone (LLVM people are serious people), just be mindful that if you want the most out of it and to be welcome on projects and teams generally, you have to keep the gap between ability and scope in a band: pushing hard enough to need the tools and reviewers generous with their time is good, it's how you improve, but go too far and everyone loses because you stop learning and they could have prompted the bot themselves.
m463
5 months ago
This comment seems to use "I" a lot.
jlebar
5 months ago
As a former LLVM developer and reviewer, I want to say:
1. Good for you.
2. Ignore the haters in the comments.
> my latest PR is my second-ever to LLVM and is an entire linter check.
That is so awesome.
> The code is of terrible quality and I am at 100+ comments on my latest PR.
The LLVM reviewers are big kids. They know how to ignore a PR if they don't want to review it. Don't feel bad about wasting people's time. They'll let you know.
You might be surprised how many PRs even pre-LLMs had 100+ comments. There's a lot to learn. You clearly want to learn, so you'll get there and will soon be offering a net-positive contribution to this community (or the next one you join), if you aren't already.
Best of luck on your journey.
close04
5 months ago
> They know how to ignore a PR if they don't want to review it
How well does that scale as the number of such contributions increases and the triage process itself becomes a sizable effort?
LLMs can inadvertently create a sort of DDoS even with the best intentions, and mitigating it costs something.
sampullman
5 months ago
Wait and see, then change the policy based on what actually happens.
I sort of doubt that all of a sudden there's going to be tons of people wanting to make complex AI contributions to LLVM, but if there are just ban them at that point.
yeasku
5 months ago
It has happened to Curl.
danillonunes
5 months ago
Curl's case was related to its bug bounty, so with money involved the incentives are different.
jjmarr
5 months ago
Thanks. I graduated 3 months ago and this has been a huge help.
thesz
5 months ago
> You might be surprised how many PRs even pre-LLMs had 100+ comments
What about percentages?
jubalfh
5 months ago
here's to the hope you get banned from contributing to llvm.
JonChesterfield
5 months ago
This is exciting. Thank you for raising the point. I've posted https://discourse.llvm.org/t/our-ai-policy-vs-code-of-conduc... to see what other people think of this. Thank you for your commit, and especially for not mentioning that it's AI generated code that you don't understand in the review, as it makes my point rather more forcefully than otherwise.
perching_aix
5 months ago
graceful...
> and especially for not mentioning that it's AI generated code
https://github.com/llvm/llvm-project/pull/146970#issuecommen...
irony really is dead
totallymike
5 months ago
Only bothering to mention it in response to one of many review comments is nearly the same as not disclosing it.
perching_aix
5 months ago
We might know the word "disclose" very differently, then. I'm amenable to taking issue with them not disclosing it up front, but then their guidelines - if the person above is to be believed - don't require it, and they did disclose it a few days after opening it. It was also not them responding to an allegation or anything; they disclosed it completely on their own terms. And that was two months ago.
I find that latter part particularly relevant, considering the hoopla is about AI bros being lazy dogs who can't be bothered to put in the hard work before attempting to contribute. Irony being then that the person above just took an intentionally cut short citation to paint the person in a somehow even more negative light than they'd have otherwise appeared in, while simultaneously not even bothering to review the conduct they're proposing to police to confirm it actually matches their knowingly uncharitable conjecture. Two wrongs not making a right or whatever.
JonChesterfield
5 months ago
I searched the commit message and the page github showed. That seems reasonable due diligence on my part. In particular, demanding lots of effort on the part of people to compensate for AI spam is rather at the root of why this trend is damaging.
It should be clear that my objection is to the mix of coc + ai in the context of llvm, not to this specific instance where someone is acting within the rules llvm has written down.
jjmarr
5 months ago
In the future I plan on disclosing the use of AI in the body of the original PR so it's clearer.
JonChesterfield
5 months ago
Thanks for digging that out; it was hidden in GitHub's folding of many messages.
totallymike
5 months ago
[flagged]
tomhow
5 months ago
You can't comment like this on Hacker News, no matter what you're replying to. You've been on HN a long time and we've never had to warn you before, but please take a moment to read the guidelines and make an effort to observe them, especially these ones:
Be kind. Don't be snarky. Converse curiously; don't cross-examine. Edit out swipes.
When disagreeing, please reply to the argument instead of calling names. "That is idiotic; 1 + 1 is 2, not 3" can be shortened to "1 + 1 is 2, not 3."
Please don't fulminate. Please don't sneer, including at the rest of the community.
Please respond to the strongest plausible interpretation of what someone says, not a weaker one that's easier to criticize. Assume good faith.
Please don't post shallow dismissals, especially of other people's work. A good critical comment teaches us something.
0000000000100
5 months ago
Go look at the PR, man; it's pretty clear that he hasn't just dumped out LLM garbage and has put serious effort and understanding into the problem he's trying to solve.
It seems a little mean to tell him to stop coding forever when his intentions and efforts seem pretty positive for the health of the project.
thesz
5 months ago
One of the resolved conversations contains a comment along the lines of "you should warn about incorrect configuration in constructor, look how it is done in some-other-part-of-code."
This means that he did not put serious effort into understanding what, when, and why others do things in a highly structured project like LLVM. He "wrote" the code and then dumped the "written" code on the community to catch mistakes.
onli
5 months ago
That is normal for a new contributor. You can't reasonably expect knowledge of all the conventions of the project. There has to be effort to produce something good and not overload the maintainers, I agree, but missing a detail like that is not a sign that such effort is absent here.
anal_reactor
5 months ago
Every hobby at some point turns into an exclusive, invitation-only club in order to maintain the quality of each individual's contribution, but then old members start to literally die and they're left wondering why the hobby died too. I feel like most people don't understand that any organization that wants to grow needs to sacrifice quality in order to attract new members.
StopDisinfo910
5 months ago
Have you ever contributed to a very large project like LLVM? I would say clearly not from the comment.
There are pitfalls everywhere. It’s not so small that you can get everything in your head with only a reading. You need to actually engage with the code via contributions to understand it. 100+ comments is not an exceptional amount for early contributions.
Anyway, LLVM is so complex I doubt you can actually vibe-code anything valuable, so there is probably a lot of actual work in the contribution.
There is a reason the community didn't send them packing. Onboarding newcomers is hard but it pays off.
thesz
5 months ago
> Have you ever contributed to a very large project like LLVM?
Oh, I did. Here's one: https://github.com/mariadb-corporation/mariadb-columnstore-e...
> I would say clearly not from the comment.
Of course, you are wrong.
> It's not so small that you can get everything in your head with only a reading.
PSP/TSP recommends writing typical mistakes into a list and using it to self-review and to fix code before sending it into review. So, after reading code, one should write down what made him amazed and find out why it is so - whether it is a custom of a project or a peculiarity of code just read.
I actually have such a list for my work. Do you?
> You need to actually engage with the code via contributions to understand it. 100+ comments is not an exceptional amount for early contributions.
No, it is not. Dozens of comments on a PR is an exceptional amount. Early contributions should be small so that one can learn typical customs and mistakes for self-review before attempting a big code change. The PR we discuss here contains a maintainer's requirement to remove excessive commenting - the PR's author definitely did not do a codebase-style-matching cleanup job on his code before submission.
StopDisinfo910
5 months ago
The personal dig was unwarranted. I apologise.
> So, after reading code, one should write down what made him amazed and find out why it is so - whether it is a custom of a project or a peculiarity of code just read.
Sorry but that’s delusional.
The amount of people actually able to meaningfully read code, somehow identify what was so incredible it should be analysed despite being unfamiliar with the code base, maintain a list of their own likely errors, and self-review is so vanishingly low it might as well not exist.
If that's the bar a potential new contributor has to clear, you will get exactly none.
I'm personally glad LLVM disagrees with you.
thesz
5 months ago
> The amount of people actually able to meaningfully read code, somehow identify what was so incredible it should be analysed despite being unfamiliar with the code base, maintain a list of their own likely errors, and self-review is so vanishingly low it might as well not exist.
A list of frequent mistakes gets collected after contributions (or attempts). This is standard practice for high-quality software development and can be learned and/or trained, including on one's own. LLVM, I just checked, does not have a formal list of code conventions and/or typical errors and mistakes. Had they such a list, we would not have the pleasure of discussing this. The PR we are discussing would be much more polished and there would be far fewer than several dozen comments.
> If that's the bar a potential new contributor has to clear, you will get exactly none.
You are making a very strong statement, again.
jjmarr
5 months ago
I didn't make a decision on the tradeoff, the LLVM community did. I also disclosed it in the PR. I also try to mitigate the code review burden by doing as much review as possible on my end & flagging what I don't understand.
If your project has a policy against AI usage I won't submit AI-generated code because I respect your decision.
h4ny
5 months ago
> I didn't make a decision on the tradeoff, the LLVM community did. I also disclosed it in the PR.
That's not what the GP meant. Just because a community doesn't disallow something doesn't mean it's the right thing to do.
> I also try to mitigate the code review burden by doing as much review as possible on my end
That's great but...
> & flagging what I don't understand.
It's absurd to me that people should commit code they don't understand. That is the problem. Just because you are allowed to commit AI-generated/assisted code does not mean that you should commit code that you don't understand.
The overhead to others of committing code that you don't understand and then asking someone to review it is a lot higher than asking someone for directions first so you can understand the problem and the code you write.
> If your project has a policy against AI usage I won't submit AI-generated code because I respect your decision.
That's just not the point.
overfeed
5 months ago
> It's absurd to me that people should commit code they don't understand
The industrywide tsunami of tech debt arising from AI detritus[1] will be interesting to watch. Tech leadership is currently drunk on improved productivity metrics (via lines of code or number of PRs), but I bet velocity will slow down and products will become more brittle due to extraneous AI-generated code, with a lag, so it won't be immediately apparent. Only teams with rigorous reviews will fare well in the long term, but may be punished in the short term for "not being as productive" as others.
1. From personal observation: when I'm in a hurry, I accept code that does more than is necessary to meet the requirements, or is merely not succinct. Whereas pre-AI, less code would be merged with a "TBD" tacked on.
jjmarr
5 months ago
I agree with more review. The reason I wrote the PR is because AI keeps using `int` in my codebase when modern coding guidelines suggest `size_t`, `uint32_t`, or something else modern.
huflungdung
5 months ago
[dead]
Phelinofist
5 months ago
Where did you disclose it?
Sayrus
5 months ago
Only after getting reviews so it is hidden by default: https://github.com/llvm/llvm-project/pull/146970#issuecommen...
optionalsquid
5 months ago
Disclosing that you used AI three days after making the PR, after 4 people had already commented on your code, doesn't sit right with me. That's the kind of thing that should be disclosed in the original PR message. Especially so if you are not confident in the generated code
anon22981
5 months ago
Sounds like a junior vibe coder with no understanding of software development trying to boost their CV. Or at least I hope that’s the case.
jjmarr
5 months ago
I graduated literally 3 months ago so that's my skill level.
I also have no idea what the social norms are for AI. I posted the comment after a friend on Discord said I should disclose my use of AI.
The underlying purpose of the PR, ironically, is that Cline and Copilot keep trying to use `int` when modern C++ coding standards suggest `size_t` (or something similar).
noosphr
5 months ago
That's no different from onboarding any new contributor. I cringe at the code I put out when I was 18.
On top of all that every open source project has a gray hair problem.
Telling people excited about a new tech to never contribute makes sure that all projects turn into templeOS when the lead maintainer moves on.
totallymike
5 months ago
Onboarding a new contributor implies you’re investing time into someone you’re confident will pay off over the long run as an asset to the project. Reviewing LLM slop doesn’t grant any of that; you’re just plugging thumbs into cracks in the glass until the slop-generating contributor gets bored or feels like they got what they wanted, and then moves on to another project.
I accept that some projects allow this, and if they invite it, I guess I can’t say anything other than “good luck,” but to me it feels like long odds that any one contributor who starts out eager to make others wade through enough code to generate that many comments purely as a one-sided learning exercise will continue to remain invested in this project to the point where I feel glad to have invested in this particular pedagogy.
noosphr
5 months ago
>Onboarding a new contributor implies you’re investing time into someone you’re confident will pay off over the long run as an asset to the project.
No you don't. And if you're that entitled to people's time you will simply get no new contributors.
totallymike
5 months ago
I’ll grant you that, but at least a new contributor who actually writes the code they contribute has offered some level of reciprocity with respect to the time it takes to review their contributions.
Trying to understand a problem and taking some time to work out a solution proves that you’re actually trying to learn and be helpful, even if you’re green. Using a LLM to generate a nearly-thousand-line PR and yeeting it at the maintainers with a note that says “I don’t really know what this does” feels less hopeful.
I feel like a better use of an LLM would be to use it for guidance on where to look when trying to see how pieces fit together, or maybe get some understanding of what something is doing, and then by one’s own efforts actually construct the solution. Then, even if one only has a partial implementation, it would feel much more reasonable to open a WIP PR and say “is this on the right track?”
riehwvfbk
5 months ago
Not getting thousand line AI slop PRs from resume builders who are looking for a "LLVM contributor" bullet point before moving on is a net positive. Lack of such contributors is a feature, not a bug.
And you can't go and turn this around into "but the gate keeping!" You just said that expecting someone to learn and be an asset to a project is entitlement, so by definition someone with this attitude won't stick around.
Lastly, the reason that the resume builder wants the "LLVM contributor" bullet point in the first place is precisely because that normally takes effort. If it becomes known in the industry that getting it simply requires throwing some AI PR over the wall - the value of this signal will quickly diminish.
totallymike
5 months ago
Unrelated to my other point, I absolutely get wanting to lower barriers, but let’s not forget that templeOS was the religious vanity project of someone who could have had a lot to teach us if not for mental health issues that were extant early enough in the roots of the project as to poison the well of knowledge to be found there. And he didn’t just “move on,” he died.
While I legitimately do find templeOS to be a fascinating project, I don’t think there was anything to learn from it at a computer science level other than “oh look, an opinionated 64-bit operating environment that feels like classical computing and had a couple novel ideas”
I respect that instances like it are demonstrably few and far between, but don’t entertain its legacy far beyond that.
lelanthran
5 months ago
> While I legitimately do find templeOS to be a fascinating project, I don’t think there was anything to learn from it at a computer science level other than “oh look, an opinionated 64-bit operating environment that feels like classical computing and had a couple novel ideas”
I disagree, actually.
I think that his approach has a lot to teach aspiring architects of impossibly large and complex systems, such as "create a suitable language for your use-case if one does not exist. It need not be a whole new language, just a variation of an existing one that smooths out all the rough edges specific to your complex software".
His approach demonstrated very large gains in an unusually complicated product. I can point to projects written in modern languages that come nowhere close to being as high-velocity as his, because his approach was fine-tuned to the use-case of "high-velocity while including only the bare necessities of safety."
MangoToupe
5 months ago
I think the project and reviewers are both perfectly capable of making their own decisions about the best use of their own time. No need to act like a dick to someone willing to own up to their own behavior.
sethammons
5 months ago
Your final sentence moved me. Moved to flagging the post, that is.
fuoqi
5 months ago
Well, some people just operate under the "some of you may die, but it's a sacrifice I am willing to make" principle...
thrownawayohman
5 months ago
[flagged]
JohnBooty
5 months ago
There are a number of other issues, such as the ethical and environmental ones. However, this one in isolation...
Popular LLMs are really great at
generating plausibly looking, but meaningless
content. They are capable of providing good
assistance if you are careful enough
I'm struggling to understand this particular angle. Humans are capable of generating extremely poor code. Improperly supervised LLMs are capable of generating extremely poor code.
How is this an LLM-specific problem?
I believe part of the argument here (or perhaps the entire argument) is that LLMs certainly enable more unqualified contributors to generate larger quantities of low-quality code than they would have been able to otherwise. Which... is true.
But still I'm not sure that LLMs are the problem here? Nobody should be submitting unexpected, large, hard-to-review quantities of code in the first place, LLM-aided or otherwise. It seems to me that LLMs are, at worst, exposing an existing flaw in the governance process of certain projects?
wodenokoto
5 months ago
It means, if you can’t write it, they don’t trust you to be able to evaluate it either.
As for humans who can’t write code, their code doesn’t tend to look like they can.
SAI_Peregrinus
5 months ago
> Nobody should be submitting unexpected, large, hard-to-review quantities of code in the first place,
Without LLMs, people are less likely to submit such PRs. With LLMs they're more likely to do so. This is based on the recent increase in such PRs that pretty much all projects have seen. Current LLMs are extremely sycophantic & encourage people to think they're brilliant revolutionary thinkers coming up with the best <ideas, code, etc> ever. Combined with the marketing of LLMs as experts, it's pretty easy to see why some people fall for the hype & believe they're doing valuable work when they're really just dumping slop on the reviewers.
29athrowaway
5 months ago
LLMs trained on open source make the common mistakes that humans make.
wobfan
5 months ago
> make.
No, made. Which is a very important difference.
perching_aix
5 months ago
[flagged]
AdieuToLogic
5 months ago
[flagged]
johnfn
5 months ago
But it's also difficult to prove it correct by argument or evidence. "Refute" is typically used in a context that suggests that the thing we're refuting has a strong likelihood of being true. This is only difficult to prove incorrect because it's a summary of the author's opinion.
perching_aix
5 months ago
[flagged]
sgarland
5 months ago
But definitions can be and are proven false. I hate it, mind you, but I can’t ignore it. For example, the usage of “literally” as an intensifier, e.g. “I literally died of laughter.”
perching_aix
5 months ago
Logical statements can be proven true or false. Definitions are not logical statements; they do not have truth values, and therefore can be proven neither true nor false. These are mathematical logic basics.
drdeca
5 months ago
Yes. However, in some cases (though probably not the ones relevant here) a definition can be proven to be incoherent (or, to presuppose something false), which is vaguely similar to “being false”.
thaumasiotes
5 months ago
It would be difficult for a definition to make any presuppositions. You could have a definition that defines some set in which a contradiction is involved ("an integer is special if it is both prime and divisible by 4"), but then you'd say that the set so defined is empty, not that the definition is incoherent.
drdeca
5 months ago
It is quite common for a lemma to be needed to ensure that a definition is well-defined. The term “defi-lemma” exists for a reason.
As a simple example, suppose X is a set and r is a relation on X. If I define Y := X/r, the set of equivalence classes with respect to r, this implicitly assumes that the relation r is an equivalence relation.
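To make that concrete, here is a minimal Lean 4 sketch (my own illustration, not from the thread) of how the required lemma shows up: the quotient cannot even be formed without a proof that r is an equivalence relation.

    -- Illustrative only: the "defi-lemma" is mandatory here.
    variable {X : Type} (r : X → X → Prop)

    -- `Setoid.mk` packages the relation together with a proof `h` that it is
    -- an equivalence; `Quotient` only accepts such a package, so defining
    -- X/r presupposes exactly the lemma discussed above.
    def quotientOf (h : Equivalence r) : Type :=
      Quotient (Setoid.mk r h)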
Eisenstein
5 months ago
But that is their whole point -- as much as you want to make the definition something else, you can't. And this is a perfect example of that.
thaumasiotes
5 months ago
> You may notice that opinions are like assholes: everyone has theirs.
Maybe. There's a known condition in pigs that prevents them from forming one.
AdieuToLogic
5 months ago
[flagged]
perching_aix
5 months ago
Was this meant in response to what I wrote or did you mean to post this elsewhere in the thread? If the former, I'm not sure what am I supposed to do with this.
AdieuToLogic
5 months ago
> Was this meant in response to what I wrote or did you mean to post this elsewhere in the thread? If the former, I'm not sure what am I supposed to do with this.
You wrote:
You may notice that opinions are like assholes: everyone
has theirs. They're literally just "thoughts and feelings".
They may masquerade as arguments from time to time, much to
my dismay, but rest assured: there's nothing to "refute",
debate, or even dispute on them. Not in general, nor in
this specific case either.
I provided analysis supporting my position that the project maintainers most likely did not make this policy based on "literally just 'thoughts and feelings'" and, instead, made an informed policy based on experience and rational discourse. I am not a Gentoo maintainer, so I cannot definitively state that possibility #3 is what happened. Maybe one or both of the other two possibilities is what transpired. I doubt it, but if you have evidence refuting possibility #3, please share so we may all learn.
perching_aix
5 months ago
An informed opinion is still an opinion. Voting itself is an expression of opinion, which they participated in - if it merely followed logically, it wouldn't have needed to be voted upon. Mind you, the "experience and rational discourse" is not presented, not in the policy, not in the excerpts and link you just provided.
In order to "refute" their entire position, if we accept that to even make sense (I do not), I'd need to either prove them wrong about what their opinions are (nonsense), or show evidence they were actually holding a different opinion that ran contrary to what they shared (impossible, their actual opinion is known only to them, if that). There's very little "logical payload" to their published policy, if any. It's a series of opinions, and then a conclusion. Hence my example with the person not liking a given TV show, but stating their distaste as a fact of the world.
> I doubt it, but if you have evidence refuting possibility #3, please share so we may all learn.
Why am I being rhetorically coerced into engaging with something from a false set of options of your imagination, exactly?
thaumasiotes
5 months ago
> I provided analysis supporting my position that the project maintainers most likely did not make this policy based on "literally just 'thoughts and feelings'" and, instead, made an informed policy based on experience and rational discourse.
That position would look better if they hadn't relied so heavily on feelings to justify the announcement:
>> Their operations are causing concerns about the huge use of energy and water.
>> The advertising and use of AI models has caused a significant harm to employees [which ones?] and reduction of service quality.
>> LLMs have been empowering all kinds of spam and scam efforts.
There is no experience or rational discourse involved there.
ants_everywhere
5 months ago
You're missing a very important reason
4 - There is a very active anti-LLM activist movement and they care more about participating in it than they care about free software.
For example, see their rationale, which is just canned anti-LLM activist talking points. You see the same ones repeated and memed ad nauseam if you lurk in anti-AI spaces.
AdieuToLogic
5 months ago
> You're missing a very important reason
> 4 - There is a very active anti-LLM activist movement ...
All I can say to this is that my position is that Large Language Models (LLMs) are a combination of algorithms and data.
As such, for me they do not qualify as anything to be either "pro" or "anti", let alone a participant in an activist movement.
perching_aix
5 months ago
They were not talking about LLMs being participants of anything, but people who are against LLMs in whatever capacity. Surely people can be participants of a movement.
AdieuToLogic
5 months ago
>> All I can say to this is that my position is that Large Language Models (LLMs) are a combination of algorithms and data.
>> As such, for me they do not qualify as anything to be either "pro" or "anti", let alone a participant in an activist movement.
> They were not talking about LLMs being participants of anything ...
Clearly I was referencing LLM's being something to foment "an activist movement" in an attempt to de-escalate the implication of there being some kind of "anti-LLM activist movement."
> ... but people who are against LLMs in whatever capacity. Surely people can be participants of a movement.
At this point your replies to my posts appear to be intentionally adversarial.
perching_aix
5 months ago
> Clearly I was referencing LLM's being something to foment "an activist movement" in an attempt to de-escalate the implication of there being some kind of "anti-LLM activist movement."
Well, no, that really wasn't clear to me at all. I don't think it was clear in general either.
> At this point your replies to my posts appear to be intentionally adversarial.
Not my actual intention, apologies, although I 100% understand if at this point that is not at all believable.
ants_everywhere
5 months ago
> in an attempt to de-escalate the implication of there being some kind of "anti-LLM activist movement."
Are you trying to imply that there isn't an anti-LLM activist movement? There certainly is
you can just google it
https://www.google.com/search?hl=en&q=anti%20llm%20activist%...
a subreddit dedicated to the anti-AI movement
https://www.reddit.com/r/antiai/
"what is the goal of the so-called anti-ai movement"
https://www.reddit.com/r/aiwars/comments/1mn3hh8/what_is_the...
"how do we build an anti-ai movement"
https://www.reddit.com/r/ArtistHate/comments/1hk3wuj/how_do_...
"The New Luddites Aren’t Backing Down
Activists are organizing to combat generative AI and other technologies—and reclaiming a misunderstood label in the process."
https://www.theatlantic.com/technology/archive/2024/02/new-l...
"Anti-AI Explained: Why Resistance to Artificial Intelligence Is Growing
As AI tools become more advanced, a growing chorus of critics is raising alarms about job loss, misinformation and other societal risks. Learn what’s fueling the anti-AI movement."
https://builtin.com/artificial-intelligence/anti-ai
"together against AI"
https://togetheragainstai.com/
and on and on
I invite people to become familiar with the movement and their arguments and compare them to the Gentoo rationale.
paulcole
5 months ago
How is it telling at all?
It’s just what every other tech bro on here wants to believe: that using LLM code is somehow less pure than using free-range, organic, human-written code.