To me, it seems like LeCun is missing the point of the (many and diverse) doom arguments.
There is no need for consciousness; there is only a need for a bug. It was pure luck that Nikita Khrushchev happened to be in New York when Thule Site J mistook the rising moon for a Soviet missile attack.
There is no need for AI to seize power: humans will promote any given AI beyond its competence, just as they already do with fellow humans (the Peter principle).
The relative number of good and bad actors (even if we could agree on what that meant, which we can't, especially given commons problems, iterated prisoners' dilemmas, and similar Nash equilibria) doesn't help either way when the AI isn't aligned with the user.
(You may ask what I mean by "alignment"; here I mean it in the vector-cosine-similarity sense: how closely will the AI do what the user really wants, rather than what the AI's creator wants, or what nobody at all wants because it's buggy?)
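To make the cosine-similarity metaphor concrete, here's a minimal sketch in Python; the "intent vectors", the feature space, and all the numbers are invented purely for illustration, not taken from any real system:

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    # Cosine of the angle between two vectors:
    # 1.0 = pointing the same way, 0.0 = orthogonal, -1.0 = opposed.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Hypothetical "intent vectors" in a made-up 3-feature space.
what_the_user_wants    = [1.0, 0.9, 0.1]
what_the_ai_does       = [1.0, 0.2, 0.6]
what_the_creator_wants = [0.3, 0.1, 1.0]

# Alignment, on this toy definition, is just how parallel two vectors are.
print(cosine_similarity(what_the_user_wants, what_the_ai_does))         # AI vs. user
print(cosine_similarity(what_the_user_wants, what_the_creator_wants))   # user vs. creator
```

On this toy definition, misalignment doesn't require malice: a bug is simply a vector pointing somewhere that nobody chose.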
But even then, AI compute is proportional to how much money you have, so it's not a democratic battle; it's an oligarchic one.
And even then, reality keeps disproving the saying that "the only way to stop a bad guy with a gun is a good guy with a gun": it is much easier to harm and destroy than to heal and build.
And that's without anyone needing to reach for "consciousness in the machines" (whichever of the 40-something definitions of "consciousness" you prefer).
Likewise, it doesn't need plausible-but-not-yet-demonstrated futures like "engineering a pandemic" or "those humanoid robots in the news right now: could we use them as the entire workforce in a factory that makes more of them?"