I find the premise reasonable, though I do have an observation.
The current AI hype may have placed us in a filter bubble or echo chamber that shapes our conclusions: highly specialized recommendation algorithms can nudge us toward, or reward us for, thinking in particular ways.
Regarding programming languages, there is immense value in understanding the internal primitives each language is built on.
As an example, consider concurrency primitives. Different languages expose them at different levels of abstraction: high-level library support in Python, the event loop in JavaScript, compiler-level implementations in Rust and C++, runtime-intrinsic mechanisms in Go and Java, and virtual-machine intrinsics, as in Erlang's BEAM.
By viewing languages through this lens, you recognize that each implements these primitives differently, allowing you to choose the most effective tool for the job.
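To make that concrete, here is a minimal sketch in Go, where the primitives are intrinsic to the runtime: `go` is a language keyword and channels are a built-in type, so nothing beyond the standard library is involved. The `worker` function and the squaring workload are purely hypothetical illustrations, not anything from the original discussion.

```go
package main

import (
	"fmt"
	"sync"
)

// worker consumes jobs and emits squared results. The goroutines
// running it are scheduled by the Go runtime itself, not by a library.
func worker(jobs <-chan int, results chan<- int, wg *sync.WaitGroup) {
	defer wg.Done()
	for j := range jobs {
		results <- j * j
	}
}

func main() {
	jobs := make(chan int, 5)    // channels are a language built-in
	results := make(chan int, 5) // buffered so workers never block on send

	var wg sync.WaitGroup
	for w := 0; w < 3; w++ {
		wg.Add(1)
		go worker(jobs, results, &wg) // "go" is a keyword, not an API call
	}

	for j := 1; j <= 5; j++ {
		jobs <- j
	}
	close(jobs)

	wg.Wait()      // all workers have drained the jobs channel
	close(results) // safe to close once every producer is done

	for r := range results {
		fmt.Println(r)
	}
}
```

In Python the same shape would typically go through a library such as concurrent.futures, and in JavaScript it would be expressed as promises on the event loop. The underlying idea is the same; what differs is the abstraction level at which the language hands it to you.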
If your goal is to assess the short-term economic value of a technology, your logic is understandable. Still, learning new languages and tools remains worthwhile. When AI agents begin invoking these tools on the fly, you may not know whether a specific choice is the most effective one, and without that knowledge you will lack the grounding to challenge the AI's decision.
In the long run, the effort to master these concepts yields far greater value for you as a software engineer: it lets you understand the rationale for applying a specific tool to a specific task.
There are valid arguments for several perspectives here. But while almost any approach can be useful, this discussion highlights the need for wisdom, in the sense of awareness of one's own biases. As I noted earlier, filter bubbles can distort judgment, so continuously questioning your conclusions helps steer you toward better outcomes. I hope you find this recommendation useful.