Something is afoot in the land of Qwen

766 points | posted a day ago
by simonw

294 Comments

sosodev

a day ago

I really hope this doesn't hinder development too much. As Simon says, Qwen3.5 is very impressive.

I've been testing Qwen3.5-35B-A3B over the past couple of days and it's a very impressive model. It's the most capable agentic coding model I've tested at that size by far. I've had it writing Rust and Elixir via the Pi harness and found that it's very capable of handling well defined tasks with minimal steering from me. I tell it to write tests and it writes sane ones ensuring they pass without cheating. It handles the loop of responding to test and compiler errors while pushing towards its goal very well.

misnome

a day ago

I've been playing with 3.5:122b on a GH200 the past few days for rust/react/ts, and while it's clearly sub-Sonnet, with tight descriptions it can get small-medium tasks done OK - as well as Sonnet if the scope is small.

The main quirk I've found is that it has a tendency to decide halfway through following my detailed instructions that it would be "simpler" to just... not do what I asked, and I find it has stripped all the preliminary support infrastructure for the new feature out of the code.

sheepscreek

a day ago

That sounds awfully similar to what Opus 4.6 does on my tasks sometimes.

> Blah blah blah (second-guesses its own reasoning half a dozen times, then goes): Actually, it would be simpler to just ...

Specifically on Antigravity, I've noticed it doing that trying to "save time" to stay within some artificial deadline.

It might have something to do with the system messages and the reinforcement/realignment messages that are interwoven into the context (but never displayed to end-users) to keep the agents on task.

jtonz

21 hours ago

As someone who started using Co-work, I feel like I am going insane with how frequently I have to tell it to stay on task.

If you ask it to do something laborious, like review a bunch of websites for specific content, it will constantly give up, providing you information on how you can continue the process yourself to save time. It's maddening.

zzrrt

21 hours ago

That’s pretty funny when compared with the rhetoric like “AI doesn’t get tired like humans.” No, it doesn’t, but it roleplays like it does. I guess there is too much reference to human concerns like fatigue and saving effort in the training.

martin-t

20 hours ago

This is what happens when a bunch of billionaires convince people autocomplete is AI.

Don't get me wrong, it's very good autocomplete and if you run it in a loop with good tooling around it, you can get interesting, even useful results. But by its nature it is still autocomplete and it always just predicts text. Specifically, text which is usually about humans and/or by humans.

selcuka

18 hours ago

You are not wrong, but after having started working with LLMs, I have this feeling that many humans are simply autocomplete engines too. So LLMs might be actually close to AGI, if you define "general" as "more than 50% of the population".

goodmythical

4 hours ago

Humans are absolutely auto-complete engines, and regularly produce incorrect statements and actions with full confidence in it being precisely correct.

Just think about how many thousands of times you've heard "good morning" after noon both with and without the subsequent "or I guess I should say good afternoon" auto-correct.

jrumbut

18 hours ago

Well, the essence of software engineering is taking these complex real-world tasks and breaking them down into simpler parts until they can be done by (conceptually) simple digital circuits.

So it's not surprising that eventually autocomplete can reach up from those circuits and take on some tasks that have already been made simple enough.

I think what's so interesting is how uneven that reach is. Some tasks it is better than at least 90% of devs and maybe even superhuman (which, in this case, I mean better than any single human. I've never seen an LLM do something that a small team couldn't do better if given a reasonable amount of time). Other cases actual old school autocomplete might do a better job, the extra capabilities added up to negative value and its presence was a distraction.

Sometimes there is an obvious reason why (solving a problem with lots of example solution online vs working with poorly documented proprietary technologies), but other times there isn't. They certainly have raised the floor somewhat, but the peaks and valleys remain enormous which is interesting.

To me that implies there is both lots of untapped potential and challenges the LLM developers have not even begun to face.

root_axis

19 hours ago

Yep. The veil of coherence extends convincingly far by means of absurd statistical power, but the artifacts of next-token prediction become far more obvious when you're running models that can work on commodity hardware.

justinclift

11 hours ago

> As someone who started using Co-work, I feel like I am going insane with how frequently I have to tell it to stay on task.

Used to have the same thing happening when using Sonnet or Opus via Windsurf.

After switching to Claude Code directly though (and using "/plan" mode), this isn't a thing any more.

So I reckon the problem lies in some of these UIs/harnesses, and probably not in the models they're sending the data to. Windsurf, for example, we no longer use due to its inferior results.

shinycode

13 hours ago

I found it better to split the work into smaller tasks from a first overall analysis, have it do only that subtask, and have it give me the next prompt once finished (or feed that to a system of agents). There is a real threshold beyond which quality is lost.

bandrami

20 hours ago

It really is like having an intern, then

throwup238

21 hours ago

In my experience all of the models do that. It's one of the most infuriating things about using them, especially when I spend hours putting together a massive spec/implementation plan and then have to sit there babysitting it going "are you sure phase 1 is done?" and "continue to phase 2"

I tend to work on things where there is a massive amount of code to write but once the architecture is laid down, it's just mechanical work, so this behavior is particularly frustrating.

dripdry45

15 hours ago

I hope you will excuse my ignorance on this subject; as a learning question for me: is it possible to state what you put there as an absolute condition, an overarching mandate that all the needed functions and data are present, so that it's simply plug and chug?

elcritch

13 hours ago

Recently it seems that even if you add those conditions, the LLMs will tend to ignore them. So you have to repeatedly prompt them. Sometimes strong or emphatic language will help them keep it “in mind”.

girvo

13 hours ago

Glad it's not just me then, it's been driving me slightly batty.

beepbooptheory

15 hours ago

Why keep using it then? I simply still read websites. It's not always great but sounds better than whatever that weird dynamic is!

wood_spirit

a day ago

Yeah, that happened to me with Claude Code (Opus 4.6, 1M context) for the first time today. I had to check the model hadn’t changed. It was weird. I was imagining that maybe Anthropic has a way of deciding how much resource a user actually gets and had suddenly downgraded me or something.

e1g

a day ago

Claude Code recently downgraded the default thinking level to “medium”, so it’s worth checking your settings.

nekitamo

a day ago

Thank you. The difference was quite noticeable today.

wood_spirit

14 hours ago

Thank you thank you you give me hope :)

But how do you see the current thinking level and how do you change it? I’ve been clicking around and searching and adding “effortLevel”:”high” to .claude/settings.json but no idea if this actually has any effect etc.

varshar

12 hours ago

As per Anthropic support (for Mac and Linux respectively) -

  $ echo 'export ANTHROPIC_EFFORT="high"' >> ~/.zshrc && source ~/.zshrc
  $ echo 'export ANTHROPIC_EFFORT="high"' >> ~/.bashrc && source ~/.bashrc
I prefer settings.json (VSCode) -

  "claudeCode.environmentVariables": [
    { "name": "ANTHROPIC_MODEL", "value": "claude-opus-4-6" },
    { "name": "CLAUDE_CODE_EFFORT_LEVEL", "value": "high" }
  ], ...

nnoremap

7 hours ago

Or the 2026 version: 'Hey Claude set your thinking level to high.'

jasonjmcghee

6 hours ago

I've found antigravity to be completely unusable.

It's amazing how much foundational prompting and harness matters.

mavamaarten

4 hours ago

Haha yeah I've had this happen to me too (inside copilot on GitHub). I ask it to make a field nullable, and give it some pointers on how to implement that change.

It just decided halfway that, nah, removing the field altogether means you don't have to fix the fallout from making that thing nullable.

Lmao.

varispeed

4 hours ago

Opus 4.6 found in my documentation how to flash the device and wanted to be clever and helpful and flash it for me after doing a series of fixes. I've got used to approving commands and missed that one. So it bricked the device. Then I wrote extra instructions saying that flashing of any kind is forbidden. A few days later it did it again and apologised...

storus

a day ago

> to decide halfway through following my detailed instructions that it would be "simpler" to just... not do what I asked

That's likely coming from the 3:1 ratio of linear to quadratic attention usage. The latest DeepSeek also suffers from it, which the original R1 never exhibited.

nl

15 hours ago

There is no way you can diagnose this like that. Correlation isn't causation and much more likely is a common source of reinforcement training data.

shaan7

a day ago

> that it would be "simpler" to just... not do what I asked

That sounds too close to what I feel on some days xD

reactordev

a day ago

Turn down the temperature and you’ll see less “simpler” short cuts.

smokel

a day ago

For the uninitiated: Interestingly, it is not advisable to take this to the extreme and set temperature to 0.

That would seem logical, as the results are then completely deterministic, but it turns out that a suboptimal token may result in a better answer in the long run. Also, allowing for a little bit of noise gives the model room to talk itself out of a suboptimal path.
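The mechanism being discussed can be sketched in a few lines. This is a minimal, illustrative sampler (not any particular runtime's implementation); the function name and logit values are made up:

```python
import math
import random

def sample_with_temperature(logits, temperature):
    """Sample a token index from raw logits, scaled by temperature.

    temperature -> 0 approaches greedy argmax (deterministic);
    higher values flatten the distribution, letting "suboptimal"
    tokens through that may lead to a better answer in the long run.
    """
    if temperature == 0:
        # Greedy decoding: always pick the single most likely token.
        return max(range(len(logits)), key=lambda i: logits[i])
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    return random.choices(range(len(logits)), weights=probs)[0]
```

At temperature 0 this always returns the same index; at 0.2 it mostly does but occasionally wanders, which is the "room to talk itself out of a suboptimal path" described above.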

mejutoco

7 hours ago

Setting the temperature to zero does not make the LLM fully deterministic, although it is close.

LoganDark

a day ago

I like to think of this like tempering the output space. With a temperature of zero, there is only one possible output and it may be completely wrong. With even a low temperature, you drastically increase the chances that the output space contains a correct answer, through containing multiple responses rather than only one.

I wonder if determinism will be less harmful to diffusion models because they perform multiple iterations over the response rather than having only a single shot at each position that lacks lookahead. I'm looking forward to finding out and have been playing with a diffusion model locally for a few days.

reactordev

a day ago

Yup. I think of it as how off the rails do you want to explore?

For creative things or exploratory reasoning, a temperature of 0.8 lends us to all sorts of excursions down the rabbit hole. However, when coding and needing something precise, a temperature of 0.2 is what I use. If I don’t like the output, I’ll rephrase or add context.

slices

a day ago

I've seen behavior like that when the model wasn't being served with a sufficiently large context window.

Aurornis

a day ago

> The main quirk I've found is that it has a tendency to decide halfway through following my detailed instructions that it would be "simpler" to just... not do what I asked,

This is my experience with the Qwen3-Next and Qwen3.5 models, too.

I can prompt with strict instructions saying "** DO NOT..." and it follows them for a few iterations. Then it has a realization that it would be simpler to just do the thing I told it not to do, which leads it to the dead end I was trying to avoid.

soulofmischief

10 hours ago

Claude Opus does this constantly for me, no matter how I prompt it or what is in my AGENTS.md, etc. It is the bane of my existence.

abhikul0

a day ago

Are you running it locally with llama.cpp? If so, is it working without any tweaking of the chat template? The tool calls fail for me when using the default chat template, however it seems to work a whole lot better with this: https://huggingface.co/Qwen/Qwen3.5-35B-A3B/discussions/9#69...

sosodev

19 hours ago

I’ve been running it via llama-server with no issues. Running the latest Bartowski 6-bit quant

brightball

18 hours ago

Bartowski? Like Chuck Bartowski from the TV show?

BoredomIsFun

13 hours ago

Different one. Bartowski is a minor celebrity in the local LLM world, together with Unsloth.

Balinares

4 hours ago

What's the selling point of these quants vs the Unsloth ones?

abhikul0

16 hours ago

Thanks, i'll check his quants.

Have you tried the '--jinja' flag in llama-server?

abhikul0

a day ago

Yes, it fails too. I’m using the Unsloth Q4_K_M quant. It similarly fails with Devstral 2 Small; I fixed that by using a similar template I found for it. Maybe it’s the quants that are broken; I need to redownload, I guess.

Twirrim

a day ago

I've been testing the same with some Rust, and it has spent a fair bit of time going through an infinite-seeming loop before finally unjamming itself. It seems a little more likely to jam up than some other models I've experimented with.

It's also driving itself crazy with deadpool & deadpool-r2d2 that it chose during planning phase.

That said, it does seem to be doing a very good job in general, the code it has created is mostly sane other than this fuss over the database layer, which I suspect I'll have to intervene on. It's certainly doing a better job than other models I'm able to self-host so far.

Aurornis

a day ago

> it has spent a fair bit of time going through an infinite-seeming loop before finally unjamming itself.

I think this is part of the model’s success. It’s cheap enough that we’re all willing to let it run for extremely long times. It takes advantage of that by being tenacious. In my experience it will just keep trying things relentlessly until eventually something works.

The downside is that it’s more likely to arrive at a solution that solves the problem I asked but does it in a terribly hacky way. It reminds me of some of the junior devs I’ve worked with who trial and error their way into tests passing.

I frequently have to reset it and start it over with extra guidance. It’s not going to be touching any of my serious projects for these reasons but it’s fun to play with on the side.

sosodev

a day ago

Some of the early quants had issues with tool calling and looping. So you might want to check that you're running the latest version / recommended settings.

misnome

a day ago

> and it has spent a fair bit of time going through an infinite-seeming loop before finally unjamming itself

I can live with this on my own hardware. But Opus 4.6 has developed a tendency to happily chew through the entire 5-hour allowance on the first instruction, going in endless circles. I’ve stopped using it for anything except the extreme planning now.

cbm-vic-20

a day ago

I don't know much about how these models are trained, but is this behavior intentional (ie, the people pulling the levers knew that this is how it would end up), or is it emergent (ie, pulling the levers to see what happens)?

anana_

a day ago

I've had even better results using the dense 27B model -- less looping and churning on problems

androiddrew

21 hours ago

Which dense model are you referring to? The dense model isn’t finetuned for code instruction according to the model card.

nu11ptr

a day ago

What hardware do you have it running on? Do you feel you could replace the frontier models with it for everyday coding? Would/will you?

sosodev

a day ago

Around 20ish tokens a second with 6-bit quant at very long context lengths on my AMD AI Max 395+

I’m trying to use local models whenever possible. Still need to lean on the frontier models sometimes.

politelemon

a day ago

60 to 70 on a 5080, but only tinkering for now. The smaller models seem exceptionally good for what they are, and some can even do OCR reliably.

bigyabai

a day ago

I'm getting ~30 tok/s on the A3B model with my 3070 Ti and 32k context.

> Do you feel you could replace the frontier models with it for everyday coding? Would/will you?

Probably not yet, but it's really good at composing shell commands. For scripting or one-liner generation, the A3B is really good. The web development skills are markedly better than Qwen's prior models in this parameter range, too.

jasonjmcghee

6 hours ago

That seems oddly low, a fair amount slower than what I get on my M4 (I believe it was ~45 tok/s?).

What quant are you using? How much ram does it have?

paoliniluis

a day ago

what's your take between Qwen3.5-35B-A3B and Qwen3-Coder-Next?

sosodev

a day ago

In my experience Qwen3.5 is better even at smaller distillations. From what I understand the Qwen3-next series of models was just a test/preview of the architectural changes underpinning Qwen3.5. So Qwen3.5 is a more complete and well trained version of those models.

kamranjon

a day ago

In my experience Qwen3-Coder-Next is better. I ran quite a few tests yesterday and it was much better at utilizing tool calls properly and understanding complex code. For its size, though, 3.5-35B was very impressive. Coder-Next is an 80B model, so I think it's just a size thing. Also, for whatever reason, Coder-Next is faster on my machine. The only model that is competitive in speed is GLM 4.7 Flash.

xrd

a day ago

What do you use as the orchestrator? By this I mean opencode, or the like. Is that the right term?

simonw

a day ago

I use the term "harness" for those - or just "coding agent". I think orchestrator is more appropriate for systems that try to coordinate multiple agents running at the same time.

This terminology is still very much undefined though, so my version may not be the winning definition.

kamranjon

a day ago

I'm basically using the agentic features of the Zed editor: https://zed.dev/agentic

It's really easy to set up with any OpenAI-compatible API. I self-host Qwen3-Coder-Next on my personal MBP using LM Studio and just dial in from my work laptop with Zed and Tailscale so I can connect from wherever I might be. It's able to do all sorts of things like run linting checks and tests, look for issues, refactor code, and create files. I'm definitely still learning, but it's a pretty exciting jump from just talking to a chat bot and copying and pasting things manually.

nvader

a day ago

Another vote in favour of "harness".

I'm aligning on Agent for the combination of harness + model + context history (so after you fork an agent you now have two distinct agents)

And orchestrator means the system to run multiple agents together.

Zetaphor

15 hours ago

This has also been my understanding of all of these terms so far

nekitamo

13 hours ago

In my tests, Qwen3.5-35B-A3B is better, there is no comparison. Better tool calling and reasoning than Qwen3-Coder-Next for Html/Js coding tasks of medium size. Beware the quants and llama.cpp settings, they matter a lot and you have to try out a bunch of different quants to find one with acceptable settings, depending on your hardware.

a3b_unknown

a day ago

What is the meaning of 'A3B'?

simonw

a day ago

It's the number of active parameters for a Mixture of Experts model (a misleading name, IMO).

Qwen3.5-35B-A3B means that the model itself consists of 35 billion floating point numbers - very roughly 35GB of data - which are all loaded into memory at once.

But... on any given pass through the model weights only 3 billion of those parameters are "active" aka have matrix arithmetic applied against them.

This speeds up inference considerably because the computer has to do fewer operations for each token that is processed. It still needs the full amount of memory, though, as the 3B active parameters it uses are likely different on every iteration.
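The routing described above can be sketched roughly like this. This is a toy illustration of the general MoE idea, not Qwen's actual architecture; all the sizes and names here are invented:

```python
import numpy as np

# Toy mixture-of-experts layer: all expert weights stay resident in
# memory, but only top_k experts do any matrix arithmetic per token.
rng = np.random.default_rng(0)
n_experts, top_k, d = 8, 2, 16

router = rng.standard_normal((d, n_experts))                       # routing weights
experts = [rng.standard_normal((d, d)) for _ in range(n_experts)]  # all loaded at once

def moe_forward(x):
    scores = x @ router                       # score each expert for this token
    chosen = np.argsort(scores)[-top_k:]      # keep only the top_k experts
    # Softmax over just the chosen experts' scores.
    w = np.exp(scores[chosen] - scores[chosen].max())
    w /= w.sum()
    # Only top_k of n_experts matrices are multiplied:
    # roughly top_k/n_experts of the FLOPs of a dense layer.
    return sum(wi * (x @ experts[i]) for wi, i in zip(w, chosen))

token = rng.standard_normal(d)
out = moe_forward(token)
```

Different tokens score the experts differently, so which 2-of-8 matrices get used changes from token to token, which is why all of them must stay in memory even though most sit idle on any given pass.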

zozbot234

a day ago

It will benefit from a full amount of memory for sure, but AIUI if you use system memory and mmap for your experts you can execute the model with only enough memory for the active parameters, it's just unbearably slow since it has to swap in new experts for every token. So the more memory you have in excess to that, the more inactive but often-used experts can be kept in RAM for better performance.
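The mmap approach mentioned here can be illustrated with a toy example. Mapping the file means the OS pages in only the bytes you actually touch; real runtimes such as llama.cpp do use mmap for weights, but the flat file layout below is invented for illustration:

```python
import mmap
import os
import tempfile

expert_size = 4096   # bytes per "expert" in this toy weight file
n_experts = 8

# Build a fake weight file: expert i is expert_size bytes of value i.
path = os.path.join(tempfile.mkdtemp(), "weights.bin")
with open(path, "wb") as f:
    for i in range(n_experts):
        f.write(bytes([i]) * expert_size)

with open(path, "rb") as f:
    # Map the whole file without reading it; nothing is paged in yet.
    mm = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)

    def load_expert(idx):
        # Slicing touches only this expert's pages, so the OS reads
        # only those from disk (and keeps them cached in RAM).
        return mm[idx * expert_size:(idx + 1) * expert_size]

    # Only the "hot" experts for this request are ever paged in.
    hot = {i: load_expert(i) for i in (2, 5)}
```

The more spare RAM available, the more of these pages the OS keeps cached, which matches the point above: often-used experts stay resident, cold ones cost a disk read when first touched.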

EnPissant

a day ago

The ability to stream weights from disk has nothing to do with MoE or not. You can always do this. It will be unusable either way.

zozbot234

a day ago

Agreed, but for a dense model you'd have to stream the whole model for every token, whereas with MoE there's at least the possibility that some experts may be "cold" for any given request and not be streamed in or cached. This will probably become more likely as models get even sparser. (The "it's unusable" judgment is correct if you're considering close-to-minimum requirements, but for just getting a model to fit, caching "almost all of it" in RAM may be an excellent choice.)

EnPissant

18 hours ago

Unlike offloading weights from VRAM to system RAM, I just can't see a situation where you would want to offload to an SSD. The difference is just too large, and any model so large you can't run it in system RAM is going to be so large it is probably unusable except in VRAM.

zozbot234

8 hours ago

Unusable for anything like realtime response, yes. Might be usable and even quite sensible to power less-than-realtime uses on much cheaper inference platforms, as long as the slow storage bandwidth doesn't overly bottleneck compute.

whalesalad

a day ago

What hardware are you running this on?

Zetaphor

15 hours ago

I'm running this exact same setup on a Framework Desktop and I'm seeing ~30 tokens/second

hintymad

a day ago

There has been tension between Qwen's research team and Alibaba's product team over, say, the Qwen app. And recently, Alibaba tried to impose DAU as a KPI. It's understandable that a company like Alibaba would force a change of product strategy for any number of reasons. What puzzles me is why they would push out the key members of their research team. Doesn't the industry have a shortage of model researchers and builders?

cmrdporcupine

a day ago

Perhaps they wanted future Qwen models to be closed and proprietary, and the authors couldn't abide by that.

gdiamos

11 hours ago

Results as good as Qwen has been posting would seem to trigger a power struggle.

I think companies that don’t navigate these correctly eventually lose.

lzaborowski

a day ago

One thing I’ve noticed with local models is that people tolerate a lot more trial and error behavior. When a hosted model wastes tokens it feels expensive, but when a local model loops a bit it just feels like it’s “thinking.”

If models like Qwen can get good enough for coding tasks locally, the real shift might be economic rather than purely capability.

trvz

a day ago

Wasted tokens are preferred for local models, I need the GPU mainframe in my bedroom to heat it as I live in a third world country with unreliable heating (Switzerland).

softwaredoug

a day ago

I wonder why no US lab has dumped truckloads of cash into various laps to ensure these researchers have a place at their lab.

gaoshan

a day ago

ICE has been detaining Chinese people in my area (and going door to door in at least one neighborhood where a lot of Chinese and Indians live). I was hearing about this just last week as word spread amongst the Chinese community here (Ohio) to make sure you have some legal documentation beyond just your driver's license on you at all times for protection. People will hear about this through the grapevine and it has a massive (and rightly so) chilling effect. US labs can try but with US government behaving like it is I don't think they will have much luck.

*edit: not that it matters, but since MAGA can't help but assume, these are all US citizens and green card holders that I am referring to.

bobthepanda

a day ago

Yeah, the Hyundai factory fiasco kind of dashed the idea that the enforcement would spare people working in favored industries setting up in the US.

genxy

a day ago

The Hyundai factory "enforcement" wasn't even legal. Those workers were here to train US workers and the Hyundai employees had proper visas for this.

https://apnews.com/article/immigration-raid-hyundai-korea-ic...

https://www.koreatimes.co.kr/foreignaffairs/20251112/hundred...

https://www.pbs.org/newshour/nation/attorney-says-detained-k...

The regime is powered by racism and doesn't think through things.

limagnolia

21 hours ago

Allegedly, though the local labor unions seem to disagree. I guess we'll have to wait for the facts to come out in court.

jjcc

5 hours ago

There are 2 groups of new Chinese immigrants in the US, they are quite different:

1. Those who arrived through legal channels (most studied at U.S. universities and remained on H-1B visas, with a smaller number coming through EB-5 or other visa categories) and eventually got green cards.

2. Undocumented immigrants, which include several sub-groups/waves. In the 1990s, most came from just a couple of provinces, Fujian and southern Zhejiang. After COVID, they came from different parts of China and entered through the southern border.

The contributors to AI development belong to the 1st group. They are spread across the country but a large number work in high-tech companies in Northern California.

The 2nd group was initially concentrated in New York and Southern California (the Los Angeles area). Later they expanded into nearby regions. They provide labor for Chinese-owned small businesses such as restaurants, grocery stores, and hotels.

There is an industry created largely by Chinese political dissidents helping Group 2 through asylum applications using fake materials and exploiting common Western beliefs or narratives about China like human rights concerns. For example, Alysa Liu’s father is an asylum lawyer.

ICE enforcement efforts would likely focus more on Group 2 if they are knowledgeable. Ohio should not be a high-priority area. I could be wrong due to changes over time. One indicator you can observe: are there many Chinese-owned small businesses in your area?

gaoshan

2 hours ago

They were operating in a traditional "China town" neighborhood for the detentions and the neighborhood they were going door to door in is one populated mostly by white collar professionals (tech, college professors, etc.).

hedora

3 hours ago

ICE arrests citizens and legal immigrants on a regular basis. Only 5% of the people they arrested had an immigration related conviction on their record:

https://www.cato.org/blog/5-ice-detainees-have-violent-convi...

The Bay Area is mostly exempt for now because, after Trump announced ICE was going to surge in SF, a bunch of tech billionaires with economic interests in the region convinced him not to.

Also, over the last year, there have been a bunch of high-profile arrests of Ohioans by ICE. In one example, they arrested someone for showing up to their immigration hearing, leaving their young kid separated from them outside the court.

jiggawatts

a day ago

"Papers, please." comes to the US of A.

nomel

20 hours ago

Every administration since the foundation of ICE has removed illegal immigrants and funded ICE and immigration policy/border operations [1].

[1] Removals by president: https://www.migrationpolicy.org/article/biden-deportation-re...

littlestymaar

6 hours ago

Every developed country on earth has an immigration policy and an administration dedicated to enforcing it.

No other developed country has masked goons abducting people in public, wearing civilian clothes and masks, and disregarding every law of the country (violating private property and foreign embassies, deporting its own citizens, and numerous other preposterous bullshit).

Immigration policy enforcement is normal, the madness that has been running in the US for a year isn't.

fragmede

10 hours ago

Yes, but the difference in degree, and in manner, is material. The big showy Hyundai plant raid is an example of something that hasn't happened before. Under Obama there were I-9 audits of Infosys, and under Clinton there was a raid of a Filiberto's, but neither of those targeted foreign workers here to train Americans.

the_gipsy

7 hours ago

Yes what's happening is totally normal, everybody does it all the time.

coredog64

2 hours ago

"Papers, please" is not just ID, but also authorization to travel internally. The idea that asking for ID* is anywhere equivalent is asinine.

*Reminder that folks visiting the US on a visa are legally required by the terms of said visa to always carry upon their person at least a copy of identity papers backing up that visa, and that this law has been in place for a very long time.

gunsle

18 hours ago

You have to show ID to pick up a prescription or open a bank account. You have to show ID for routine traffic stops. This is such a juvenile, tired argument.

baby_souffle

18 hours ago

“You show ID at the bank” is a classic, juvenile and tired argument because it swaps in a voluntary transaction for state coercion.

The concern isn’t IDs exist—it’s who’s demanding them, in what context, and what happens if you can’t comply on the spot.

jiggawatts

17 hours ago

Not to mention that the USA has a long history of looking down their nose at the USSR for doing exactly what the USA is doing now.

I forgot that HN is mostly filled with a younger generation that might not get the reference.

The "Papers, please." quote is a common trope in spy movies, books, etc... about the former Soviet Union.

binarycrusader

16 hours ago

You absolutely do not have to show ID to pick up every prescription; just some, which is also dependent on state law, federal law, and pharmacy.

But also, I don't care if it's a tired argument--this isn't about how things are, it's about how we want them to be. I don't want to live in a state action-coerced society.

z3t4

8 hours ago

In Sweden you have to use your Bank ID to take the bus, meaning the bank has the same security barrier as boarding a bus. If you get robbed in Sweden, they take your Bank ID, use it to take out a loan, and then transfer all your money to a foreign bank. Because we got rid of physical cash.

DiogenesKynikos

10 hours ago

Imagine if you forgot to bring your ID card to the bank, and they grabbed you and the next thing you knew, you were in a concentration camp in El Salvador.

sourcegrift

a day ago

Yes. Yes, so true. And the phd types building these models are probably even scared in China that ICE will fly there to deport them.

jwolfe

a day ago

This thread is about bringing these people to the US.

swiftcoder

12 hours ago

There's no huge reason to bring them to the US. Plenty of US corporations maintain overseas offices. Even if it's impolitic to employ them directly in China, you can employ them in other offices (for example, Amazon has been known to do this with its Singapore offices).

user34283

10 hours ago

This thread is largely pointless political back-and-forth where, predictably, the comments with a more positive opinion on current US immigration enforcement will be flagged.

To get back to the original point, personally I doubt sentiment on US immigration enforcement would be so significant a deterrent for Chinese talent, who may not share the political views of the American left for whom this is a big concern.

Matl

7 hours ago

> pointless political back-and-forth were predictably the comments with a more positive opinion on current US immigration enforcement will be flagged

Given the tactics employed by ICE, it's a true shock and horror that most people have more humanity than that.

But I guess a person who can't form a grammatically correct sentence is an example of the sort of person who can rest easy.

velcrovan

a day ago

What the US has done is dumped truckloads of cash to make it likely that as a legal immigrant you will be abducted and sent to a camp.

seanmcdirmid

a day ago

They already kind of do, but I think anyone who was in it for US money has already left for it, and the money China is throwing at the problem is pretty good also. You can also have a lot more influence in a Chinese company without having to adopt a weird new American corporate culture.

mft_

a day ago

Indeed; or, Europe badly needs a competitive model to hedge against US political nonsense.

ivan_gammel

a day ago

Offering „You are welcome“ relocation package to Anthropic might be a good idea.

cmrdporcupine

a day ago

Anthropic has gone out of their way to make a point about how much they love and admire the US state and its defense sector. Only drawing the line at a very far point and even when they drew the line it was with a big thing about how they believe in the American defense sector blah blah blah.

In any case, there's no way Anthropic's investors in Silicon Valley would countenance such a move.

Also, I'm biased the logical place is Canada, not Europe. Much of the fundamental/foundational research on LLMs, and a large part of the talent, came from universities in Canada anyways.

vasco

14 hours ago

Look up how many of the main people are Polish.

Given how American govt. has treated Anthropic, I think you might be right. EU truly has a remarkable opportunity to make Anthropic/Claude European.

petcat

a day ago

This US administration (or any admin) would almost certainly impose export controls on US AI technology before it would allow one of the frontier model providers to be acquired/relocate outside the US. It did the same thing when ASML wanted to acquire Cymer (California company that provides the EUV light source technology). The acquisition was only allowed under strict technology sharing/export agreements with the Dutch government.

Europe really just needs to rally behind Mistral. That's where they should dump their cash.

kelvinjps10

2 hours ago

But can they restrict the researchers from leaving? What if they are just offered regular jobs by a new European company? What the US has done is dictatorial, but this would be even worse if they tried to do it.

fc417fc802

a day ago

Can they actually prevent it though? In typical cases there would be IP licenses involved. But in this case it's a valuation based (AFAICT) on a team of people plus their infra. What happens if they all just happened to get hired by "AnthropicEU GmbH" a new entity which has been gifted hundreds of millions in computing resources?

ivan_gammel

a day ago

Having one „champion“ is a flawed European approach. We need local competition and headhunting to make it fly.

azinman2

a day ago

Hard to compete in an environment that's anti-996 and where the pay is so much less.

dvdkon

9 hours ago

Do you consider high working hours to be a benefit akin to higher pay? I think fewer hours and less money is a fair deal for employees.

azinman2

an hour ago

Fair deal for the employees, but hard to compete against smart, well-resourced people who are working 996. Everything in AI is moving so fast that moving slower makes you irrelevant.

I'm not sure goals are totally aligned though. The current models are created at enormous expense. We know that many stages are done incorrectly. I am confident that they can be replicated without any unique US knowledge.

At the moment my impression is instead that the issue is computational resources. It's important to stay near the frontier though, and to build up one's capacity to train large models.

Consequently I don't think we need Anthropic. It wouldn't be terrible if they came. Especially if they picked a nice location. Barcelona would be very nice, for example.

ekianjo

21 hours ago

There is no capital in the EU

rcbdev

7 hours ago

It's all tied up in real estate and horribly inefficient savings accounts.

lejalv

a day ago

Given what Amodei thinks of spying on non-US citizens, that's a hard pass from me. If you are that loyal (servile) to your country's leaders, don't go elsewhere when you "discover" they are thugs. Put up with it or revolt (as Iranians are being asked to do).

mijoharas

a day ago

It'd be great if they went to Mistral!

tiahura

a day ago

Competitive models are illegal in the EU.

ecshafer

a day ago

China is also giving them dump trucks full of cash though. Plus you have to contend with the nationalism reason (unfortunately this has died off in America for too many). The idea of building your country is valued by most Chinese I have met. Plus China is incredibly nice to live in, especially if you have lots of money and/or connections. So you can work in China, get paid lots of money, and feel like you are doing good. Or in America you can get paid lots of money, and get yelled at by people online because the Government wants to use your model.

danny_codes

a day ago

China city life is amazingly convenient. Trains and subways are just such an enormous quality of life boost. Add to that the relative cleanliness of having nearly zero homelessness and you’ve got something very compelling.

I will say we are winning in accessibility. China doesn’t have much of a ramp game

softwaredoug

a day ago

All very true.

I wonder if you max out your options in China. It seems the Party is suspicious of ambition and high profile winners. I'm sure you can live comfortably, but there's a ceiling.

danny_codes

a day ago

That’s not relevant to normal people. If you’re a billionaire with aspirations of power then it’s probably good there’s a ceiling. Sure beats having Elon randomly firing your public servants while high on ketamine.

bdangubic

a day ago

what is the issue with having a ceiling?

WarmWash

a day ago

Star athletes really hate being told they can't score more than 10 goals in a season because it's unfair to the other weaker players. The players will either leave to go play somewhere else, or they become weaker players themselves.

vuurmot

18 hours ago

how about explaining why without giving an analogy?

realharo

8 hours ago

And yet almost all of the most popular sports leagues in the US have a salary cap rule.

kelipso

a day ago

Why would a country want to welcome a psychopath whose goal is to make lots of money and wield political power that results from the money. I'm sure they would be happier with just as psychopathic people who make a bit less money but don't have aspirations of running the country from their secret bunker.

bdangubic

a day ago

wowsa - wasn't expecting star athletes and sports to enter this conversation... wild!

1024core

a day ago

I got an offer out of the blue for a consulting gig in ML, offering USD 400/hr in China. Assuming this was legit (the offeror seemed legit), it looks like China is also throwing a lot of Benjamins around...

petcat

a day ago

> Or In America you can get paid lots of money, and get yelled at by people online because the Government wants to use your model.

Isn't it just straight-up illegal in China to refuse to let the government use your model? The USA isn't perfect, but at least it has active discourse.

vintermann

15 hours ago

Probably. There are different kinds of political power though; it seems the qwen architects are using one right now.

The real political power we have through our vote is probably smaller than the political power most of us here have from the option to quit.

ecshafer

a day ago

I would imagine that even if it isn't illegal, it's a very bad idea not to. But regardless, I would bet large amounts of money that you would never get any flak for doing anything for the government. If I went on X, Threads, Bluesky, or TikTok and said "Hey I am a software engineer selling awesome new technology to the government and military!" I am going to get Americans attacking me for supporting Trump / ICE / FBI / whatever the current issue of the day is. If I did the same on Douyin or Weibo the response would be about making China strong, and there would be no criticism of that choice.

MarsIronPI

3 hours ago

> If I did the same on Douyin or Weibo the response would be able making China strong, and there would be no criticism of that choice.

Right, because in China if you were criticized for helping the government, the people who criticized you would be in for (probably life-damaging) trouble.

cmrdporcupine

a day ago

Sure, but the difference is that while the Chinese state is measurably awful on all sorts of human rights things within their own borders... they're not currently dropping bombs on foreign cities, starving a neighbour of critical petroleum shipments, or heavily funding an ally to slowly exterminate a population.

fc417fc802

a day ago

What point are you trying to make here? Are government abuses somehow inherently better or worse depending on where they happen?

Do you imagine an invasion of Taiwan won't involve dropping bombs?

I feel like we should be able to agree that providing authoritarian regimes with high tech tools is immoral in the general case.

cmrdporcupine

a day ago

My point is that as a non-American I feel no allegiance to either state, and current events don't make me sympathetic to the geopolitical aims of the USA. So I don't see a strong moral case for this tech being the especial purview of either party.

If you'd asked me two years ago my answer might have been different.

And to the original point, yeah, I would feel entirely justified in the critique of engineers in providing tools to the US defense apparatus at this point.

At least the Chinese shops are giving their weights away for free, and not demanding that any government ban the rest.

sciencejerk

15 hours ago

Why are they giving them away for free?

fc417fc802

12 hours ago

Why did Meta release theirs? The better question is, why not? If you aren't at the cutting edge and don't have a moat then releasing them is pure reputational upside with zero downside.

sciencejerk

3 hours ago

The research costs are not free. The businesses need to recoup the cost in some way, shape, or form, even if only in the long term. Seems like an expensive anti-moat for dislodging entrenched competitors.

fc417fc802

14 minutes ago

It's a winner-take-almost-all competition. Barring a moat, if you aren't at the cutting edge you won't be able to recoup the cost regardless. At that point you might as well release it to the public for reputation.

You might even get lucky and someone else does the same. If you manage to learn from their example you might be more competitive in a future round.

neves

a day ago

At least it has been decades since China Gov bombed innocent people in other countries. A peaceful and responsible government.

petcat

a day ago

> A peaceful and responsible government.

People in Hong Kong died. Over 10,000 were arrested and many are still in prison. The rest are permanently disgraced in their social-credit society.

Again, USA is not perfect, but let's not dream up some fantasy about the CCP.

throw5t432

15 hours ago

> People in Hong Kong died

Do you have a legit source for this? When I search for information, I only found this case, “Luo Changqing, a 70-year-old Hong Kong cleaner, died from head injuries sustained after he was hit by a brick thrown by a Hong Kong protester during a violent confrontation between two groups in Sheung Shui, Hong Kong on 13 November 2019.”

None of the other legit sources claim the police killed any of the rioters.

MarsIronPI

3 hours ago

Sure, sure. The CCP probably has gotten a lot better since Tiananmen Square.

cyberax

a day ago

This "social credit" thing is dead in China.

petcat

a day ago

As an American, I have no fear of calling the US President a pedo or saying Fuck the Police on my Twitter. Not the case in China. It's horrifying.

https://reclaimthenet.org/china-man-chair-interrogation-soci...

Barrin92

a day ago

> I have no fear of calling the US President a pedo or saying Fuck the Police on my Twitter.

Does that matter? In China people don't judge the state of their civilization by how easily you can insult the police but whether you need to be afraid to meet them on the street. "I can insult my pedophile president" (who doesn't care if you do) isn't exactly a flex.

It does tell us something though that the evaluation of American life now consists of parasocial interactions with the president on social media. I'm starting to believe Bruno Maçães, former Portuguese secretary of state, was prescient with his diagnosis that American material society has rotted to the point where life is now entirely defined by virtual interactions. That's the difference between China and the US today.

The president's a pedophile, a criminal, undeterred by democracy, economy or social disorder but you can freely yell into the void. Have you considered that in the US one can freely say all these things precisely because that's irrelevant?

petcat

a day ago

> The president's a pedophile, a criminal, undeterred by democracy, economy or social disorder but you can freely yell into the void. Have you considered that in the US one can freely say all these things precisely because that's irrelevant?

Americans will vote for their Congress representatives in November. They will have a chance to decide how they want their government to be run. The US President was already shot down once by the Supreme Court (tariffs). The system is working. Let the voters decide, and then let it work.

vintermann

15 hours ago

> They will have a chance to decide how they want their government to be run.

That depends on what's on the ballot.

> The system is working.

If it is, how did you end up here again?

cyberax

a day ago

Oh, China absolutely does not tolerate _public_ dissent very much including highly visible social media posts. Everybody there knows that.

But this:

> According to the social credit system, Chinese citizens are punishable if they indulge in buying too many video games, buying too much junk food, having a friend online who has a low credit score, visiting unauthorized websites, posting “fake news” online, and more.

...is just pure bullshit. There were _ideas_ about including these kinds of stuff into the score, but they have never been implemented. At this point, the social credit score is only used to find people who dodge court decisions.

fc417fc802

a day ago

"At this point" being the key phrase.

kelipso

a day ago

A key phrase that can be used to speculate about whatever bs one can think of.

fc417fc802

a day ago

A low effort and bad faith rebuttal on your part.

Please ignore the gun pointed at your head / social credit score / masked goons roving about Minnesota / flock cameras / etc as it hasn't been used against you at this point.

phatfish

20 hours ago

It seems to me your argument is in bad faith because (taking the parent's analysis at face value) you created a straw man "social credit score" that doesn't exist. But there ARE masked goons roving Minnesota.

fc417fc802

19 hours ago

I did no such thing - you are the one creating a straw man. The comment chain I responded to has several different parties making various claims about the social credit score. My comments are consistent with those I responded to.

If you wish to dispute the veracity of one or more comments in the thread, by all means do so. But please make a substantive argument and (given the nature of the topic) cite sources.

strangegecko

21 hours ago

Constant military drills around Taiwan isn't peaceful or responsible.

China is bullying lots of countries in the SCS (ramming Philippine coast guard ships, building military installations in the SCS, ...). Not peaceful or responsible.

phatfish

20 hours ago

If China was serious about a military solution for Taiwan they would be invading right now while the US is unloading into the desert.

MarsIronPI

3 hours ago

The US would get out of the desert and to Taiwan fast if that happened.

throw5t432

15 hours ago

> building military installations in the SCS

Many countries in the SCS are doing this. In fact China was late to the game, as Vietnam did it much earlier.

maxglute

20 hours ago

AKA defending itself against separatists and sovereignty intrusions from much less powerful aggressors with unreasonable amount of restraint. One would argue overly peaceful, and irresponsible to the point of detrimental peace disease. BTW PRC settled most border disputes in recorded history with most concessions, majority over 50%, that objectively makes PRC the most peaceful rising power in recent history. Even in SCS PRC was second last to militarize, the other disputees started land reclamations and militarization first (apart from Brunei), aka a fucked around and find out situation. Even then all PRC did was build a bigger island, instead of glassing theirs, PRC coast guard last to weaponize as well.

WarmWash

a day ago

What's ironic is that China is desperately trying to be that country, but the US has them in a geographic/geopolitical choke hold.

VWWHFSfQ

a day ago

> China is incredibly nice to live in

I'm sure it's a very nice place to live if you're content to just stay quiet in society and never put a political sign in your yard or even just talk about the wrong thing with your friend in a WeChat group.

eunos

11 hours ago

> never put a political sign in your yard or even just talk about the wrong thing with your friend in a WeChat.

Practically, how many care about that? Consider that in other parts of the world they also cancel folks based on social media opinions...

And that Benjamin Franklin opinion on security and freedom? That's a terminally-online phenomenon only. I once tried bringing it up to folks IRL without specifically mentioning that it came from ol' Ben himself. Many thought it was some anarchist blabber.

cyberax

a day ago

This is an exaggeration. Nobody in China cares about what you say to each other privately, and people talk about stupid policies all the time. The government cares about _public_ actions.

In practical terms, if you're not the kind of person who would want to run for office in the US, China is incredibly comfortable. Cities are safe, with barely any violent crime. Public drug use is nonexistent. And with a US-level AI researcher income, you'd be in the top 0.1% of earners.

petcat

a day ago

> nobody in China cares about what you speak with each other privately, and people talk about stupid policies all the time. The government cares about _public_ actions.

https://news.ycombinator.com/item?id=47252833

My comment and the linked video says otherwise. The guy was in a private group chat and said some nasty things about the police for confiscating his motorcycle. Now he's arrested and in the Tiger Chair.

How are we explaining this?

maxglute

a day ago

Group with 75 people. That's a crowd, doesn't matter if gated behind QR code invites. Shit talk cops and gov with the bois is fine. Shit talk / soapbox in a crowd (virtual or real) and get caught or reported = drink tea on the menu.

bdangubic

a day ago

try to protest in america and see how that works out for you long-term. or say protest against genocide in gaza at an uni or generally in public…

cyberax

a day ago

Sigh. Let's not invent things? You can protest anything in the US just fine, with generally no consequences. Heck, our local _high_ _school_ students go out and protest everything to weasel out of classes.

cheema33

a day ago

Trump admin did put people in prison and then deported them, for doing nothing more than protesting.

Not as bad as China, sure, but not as good as other civilized nations.

fc417fc802

a day ago

Let's just clarify that visitors don't have the same rights as citizens. Whether or not you agree with the current administration's policies hopefully we can agree that it is entirely reasonable for them to deport foreign political dissidents more or less at their discretion.

If you want to put this to the test try crossing the Canadian border and when they ask you the purpose of your visit respond that it's to attend a protest.

kelvinjps10

an hour ago

But the constitution is not worded as if they don't have the same fundamental rights. Even in other countries it is the same; this is done to prevent slavery and unjust incarceration. So visitors have the same fundamental rights to free speech, fair trial, etc. The US has also agreed to international conventions. But the current administration seems not to care.

fc417fc802

20 minutes ago

It's definitely not as simple as you're making out. Political speech aside, visas have routinely been cancelled without forewarning for all sorts of reasons historically.

Does someone on a short term visa have the protected right to purchase firearms? Visitors aren't even permitted to get a job without the appropriate type of visa. Being allowed to work is a pretty fundamental right.

I expect there's a difference between the bill of rights and the constitution, and likely further nuance as well.

cheema33

19 hours ago

> Let's just clarify that visitors don't have the same rights as citizens.

Yunseo Chung was not a visitor. She came to the United States from South Korea at age 7. She was arrested last year for peacefully protesting. Charges against her were dropped but the govt. canceled her green card.

The govt. has been trying to deport her since then, but the courts keep blocking it.

https://humanrightsfirst.org/yunseo-chung-v-trump-administra...

While the legality of these actions are being debated in courts, I think most of us can agree that this is reprehensible behavior on part of the Trump admin.

fc417fc802

18 hours ago

I agree that particular example is reprehensible.

I never claimed to condone the actions of the current admin. The examples of people being deported for protesting that I am familiar with are student visa holders. While I don't personally support the examples that I am aware of, I also recognize that in those specific cases the executive branch appears to be within the bounds of the law. I don't even object to the executive branch having the power to cancel the visas of political dissidents in the general case, merely to how they are choosing to apply it.

It's surprising to me to learn that a green card could be revoked for protected speech. That ought to fall well outside the bounds of the law IMO. Green cards and visas are entirely different things.

bdavisx

7 hours ago

>While I don't personally support the examples that I am aware of, I also recognize that in those specific cases the executive branch appears to be within the bounds of the law. I don't even object to the executive branch having the power to cancel the visas of political dissidents

It's my understanding that the 1st amendment applies to everyone, not just citizens. So if that's true (not 100% sure about that), how can political speech (protesting) be a valid reason to remove someone from the US?

fc417fc802

28 minutes ago

Well obviously it can't be if that's true. But is it? What led you to that conclusion?

You can certainly be denied entry for entirely arbitrary reasons. Can you also (as a visa holder) be evicted without notice for same? I think that's generally a safe assumption for any country in the world but would be interested in learning about counterexamples.

xienze

21 hours ago

> Trump admin did put people in prison and then deported them, for doing nothing more than protesting.

Link? I’m guessing we’re going to see that this definition of “protesting” involves being aggressive and directly in the face of law enforcement officers, not merely holding a sign at a distance.

cheema33

19 hours ago

> Link? I’m guessing we’re going to see that this definition of “protesting” involves being aggressive and directly in the face of law enforcement officers, not merely holding a sign at a distance.

Please read up on this one example of a US permanent resident. And then justify the actions of the govt against Yunseo Chung.

https://humanrightsfirst.org/yunseo-chung-v-trump-administra...

bdangubic

a day ago

this is funny if you are being sarcastic

cyberax

a day ago

Oh, I fully support their right to protest.

It just looks a bit ridiculous when students walk out in protest against things that are far outside the influence of their school, city, or even state.

maxglute

a day ago

> get yelled at by people online because the Government wants to use your model

Well duh, as recently demonstrated, a US model used by the US gov will 100% end up murdering actual children sooner rather than later, in this case less than a calendar year in some far-flung war that many Americans do not support. Alternatively, a PRC model used by the CCP might kill in some hypothetical future, but for national reunification/rejuvenation that many Chinese support. At the end of the day, researchers and the population on one side sleep more soundly.

leptons

a day ago

Chinese people are very racist towards non-Chinese. It might seem like a happy utopia, but if you aren't Chinese, then you may not really enjoy your time there. It may not be quite as bad as being black in rural US south, but being black (or anything non-Chinese) in China is still not going to be a good time.

WarmWash

a day ago

Racism in even the worst parts of America doesn't even begin to touch the racism present in monocultural/monoracial countries.

Larrikin

a day ago

Have you experienced racism? In Japan at least, it was evenly applied. That company won't rent to foreigners but this one will. That company won't hire foreigners but this one will. Police will bother you if you ride a bike, but they will be polite while they waste 10 minutes of your time asking for your gaijin card for biking while foreign.

In the US people try to hide it and are far more sinister about it, since there are a lot of laws against obvious racism. The cops are also happy in the US to just kill you.

The racism in the US comes out of hate, whereas what I experienced abroad was more "we don't think you'll fit in and follow the rules, and you have to constantly prove that you can."

I didn't spend too much time in China so maybe it is a racist hell hole.

But my experience in Japan was that white immigrants were way more inclined to make a huge deal about the lighter racism they experienced because they had never been somewhere where their skin color was a disadvantage.

nozzlegear

a day ago

This is a weird argument. Japanese racism is fine because the Japanese are polite and apply it evenly?

Larrikin

a day ago

Despite what some on this site will argue, racism is always bad.

Sabinus

a day ago

"we don't think you'll fit in and follow the rules and you have to constantly prove that you can"

I speculate that if you were a permanent minority instead of a visiting inconvenience, then that 'nice' racism you describe would metastasize into the type of racism you see in the USA. It's more friction from time and exposure added on. And, you know, slavery.

segmondy

20 hours ago

There's a very big difference between xenophobia and racism. Racism is much worse.

kelvinjps10

an hour ago

Xenophobia is as bad as racism. The fundamental difference between xenophobia and racism is that one is applied because of where you're from and the other because of your race. But you can suffer the same downsides with both.

losvedir

a day ago

What do you mean by racist? I'm a white/hispanic American and spent 3 months in China and didn't really notice anything problematic towards me.

leptons

19 hours ago

That's great, but it's pretty well known that racism exists in many forms in many countries. Just google "racism in china".

px43

a day ago

Wild to call 1.42 billion people racist despite having met very few of them.

leptons

a day ago

It's funny that you think you know who I've met. YOU DON'T KNOW ME.

jamespo

a day ago

Damn that social conscience, huh?

mmaunder

a day ago

Yeah that was my first thought is it’s a tit for tat poach. They got the Gemini researcher so google responded in kind.

lynndotpy

a day ago

Well, the problem aren't just the NSF funding cuts. Everyone else is already dumping truckloads of cash. There's also the public health situation (who wants measles or polio?), the risk of retaliatory attacks from the countries we're at war with, etc. You could write paragraphs about why the US is less attractive to researchers.

When I was a deep learning PhD in the first Trump administration, US universities were already very deeply affected by the Muslim ban, and so a lot of talent ended up in other countries.

Sibling commentators are rightfully pointing out that foreigners, especially those who would not be recognized as white, face an onerous and risky customs process with long-term and increasing risks of deportation. When you see a headline like the NIST labs abruptly restricting foreign scientists, _everything_ else feels uncertain. Even if someone doesn't believe they're personally at risk for deportation, they're still seeing everything else.

And then it all boils down to a reputational thing. The era where we were the top choice for research is in the past. If you start a PhD in the US on your resume during this era, you might be anticipating how you'll answer the question of why you weren't good enough to get accepted somewhere better.

sciencejerk

15 hours ago

Where do researchers go instead of USA then? Genuinely curious

lynndotpy

3 hours ago

Canada has always been appealing for people who were set on the United States but taken aback by the political climate. This goes back decades (famously in the AI space, Reagan is why Canada got Hinton instead of the US) but is exacerbated as of late. China also has pretty amazing investment in tech companies and research institutions, but Mandarin doesn't yet enjoy the same widespread adoption as English.

Big caveat that I have the perspective of just one US-based former academic.

expedition32

a day ago

If memory serves, the father of the Chinese bomb studied in America and went back. It may be inconceivable to Americans, but Chinese patriotism exists.

Besides you can live a comfortable life in PRC nowadays or live in a racist America.

bilbo0s

a day ago

They probably have tried, but you have to have more cash than those researchers feel they can get starting their own lab. When you consider the fact that their new startup lab would have the entire nation of China as, in effect, a captive market; you start to see how almost any amount of money would be too little to convince them not to make a run at that new startup. If money is their aim.

I think Alibaba needs to just give these guys a blank check. Let them fill it in themselves. Absent that, I'm pretty sure they'll make their own startup.

I do think it'd be a big loss for the rest of the world though if they close whatever model their startup comes up with.

simgt

a day ago

> I do think it'd be a big loss for the rest of the world though if they close whatever model their startup comes up with.

That's very likely to happen once the gap with OpenAI/Anthropic has been closed and they managed to pop the bubble.

bobthepanda

a day ago

I don’t know, the EV bubble deflated and Chinese firms are still pumping them out with subsidies like their life depends on it.

nopurpose

13 hours ago

How do those companies make money? Qwen, GLM, Kimi, etc. are all released for free. I have no experience in the field, but from reading HN alone my impression was that training is exceptionally costly and inference can barely be made profitable. How/why do they fund ongoing development of those models? I'd understand if they released some of their less capable models for street cred, but they release all their work for free.

theshrike79

8 hours ago

Chinese companies don't always operate on purely capitalistic principles, there is sometimes government direction in the background.

For China, the country, it's a good thing if American AI companies have to scramble to compete with Chinese open models. It might not be massively profitable for the companies producing said models, but that's only a part of the equation

miki123211

2 hours ago

China seems to combine the best points of capitalism (many companies taking many shots on goal, instead of the eastern bloc way of one centrally-mandated solution that either works or not) with the best points of communism (state-sponsored industries that don't have to generate a profit, for the glory and benefit of the state).

theshrike79

42 minutes ago

There is a certain advantage to being able to go "I want a factory city here, that will manufacture ... Toasters"

gmerc

6 hours ago

How do US tech companies make money? They don't until the competition has been starved.

rwmj

12 hours ago

The small spend may be worth it to destroy US proprietary AI companies.

indrora

12 hours ago

Ostensibly, a mix of VC funding and that they host an endpoint that lets them run the big (200+GB) models on their infrastructure rather than having to build machines with hundreds of gigs of llm-dedicated memory.

wongarsu

11 hours ago

But on inference they have to compete with any other inference provider that just has a homepage, a bunch of GPUs running vLLM, and none of the training costs. Their only real advantage is the performance optimizations that they might have implemented in their inference clusters and not made public.

MarsIronPI

3 hours ago

Qwen, at least, IIRC has some proprietary models, specifically the Max series, which have larger context windows.

vicchenai

a day ago

Been running the 32B locally for a few days and honestly surprised how well it handles agentic coding stuff. Definitely punches above its weight. Only complaint is it sometimes decides to ignore half your prompt when instructions get long, but at this size I guess that's the tradeoff.

mmis1000

19 hours ago

> Only complaint is it sometimes decides to ignore half your prompt when instructions get long

This sounds like your context is too big and getting cut off.

ramgine

17 hours ago

Running 32b on what hardware?

nurettin

15 hours ago

I'm running it on a pure-CPU 2020-model Ryzen server with 2x128 GB RAM using llama.cpp, and it seems as intelligent as GPT-4. I optimized a little by forcing it to run on a single RAM stick and tuning llama.cpp build parameters, going from 3-5 tok/s to a more acceptable 5-8 tok/s.

It can call tools and reason adequately enough to use them when appropriate.

ramgine

5 hours ago

Does keeping it on one stick make it more performant? I have an EPYC server with 1 TB of RAM in 64 GB sticks and a 3060, and I'm looking to maximize what models I can run on it.

airstrike

a day ago

I'm hopeful they will pick up their work elsewhere and continue on this great fight for competitive open weight models.

To be honest, it's sort of what I expected governments to be funding right now, but I suppose Chinese companies are a close second.

RandyOrion

16 hours ago

First, thank you Junyang and Qwen team for your incredible work. You deserve better.

This is sad for local LLM community. First we lost wizardLM, Yi and others, then we lost Llama and others, now we lost Qwen...

skeeter2020

a day ago

Getting a bit of whiplash going from "AI is replacing people" to "AI is dead without (these specific) people". Surely we're far enough along that AI can take it from here?

Wild times!

janalsncm

a day ago

Anthropic has one nine of uptime right now. One.

https://status.claude.com/

If AI could effectively replace people, you wouldn’t need CEOs to keep trying to convince people.

OsrsNeedsf2P

a day ago

That's 99%, which is two nines?

oefrha

19 hours ago

I would take their displayed uptime with a huge grain of salt. The other day Claude Code and claude.ai web were completely unavailable for me (Claude Code got into logged out state and couldn’t even log in) for at least two hours, they showed hours of “elevated errors”, yet not a single minute of downtime was recorded. And then there was yet another outage finally with recorded downtime a few hours later…

Edit: This incident: https://status.claude.com/incidents/kyj825w6vxr8

janalsncm

a day ago

It was 98.xx this morning when I posted.

jefftk

21 hours ago

This is pretty pedantic, but I think it's usually rounded. 1=90%, 2=99%, 3=99.9%. I'd say 98% is "not even two nines" but not "one nine".

janalsncm

21 hours ago

Honestly my impression was the “nines” of reliability just means how many nines your reliability starts with, as a decimal. I never thought much about it though.

I will also say it’s amusing that the debate is between one and two nines. Neither is objectively great. If you built a system with >3.65 days of downtime in a year that wouldn’t be something you’d brag about in an interview.

rprend

20 hours ago

Anthropic is a great case study in why uptime doesn’t matter. The service is so valuable that you can have one nine uptime and add $9bil ARR in 3 months.

Jeremy1026

a day ago

Anthropic also fires off the alarm bells seemingly at any sign of issue. I've personally only noticed an outage once, and the status page wasn't even showing it as down at that time. It eventually did update about 45 minutes later, then I was back up and running another 15 minutes later but the "outage" on the status page stayed up for another hour or so.

Probably good to send alerts early, but they might be going a bit too early.

levocardia

16 hours ago

"product market fit is when people are ripping the product out of your hands and everything is breaking constantly" - seems bullish to me

kylemaxwell

a day ago

Everything on that page has two nines, so not sure what you're trying to say here.

relaxing

a day ago

Right now everything on that page is 98 point something, so it must be fluctuating.

janalsncm

a day ago

This morning it was less than 99% which is one nine of reliability.

In any case, two nines of reliability is not impressive.

mungoman2

a day ago

Not sure what the uptime is meant to signal. People have quite low uptime as well…

jug

a day ago

Huh? Servers aren't people and thus have completely different expectations, or what am I missing here

px43

a day ago

9% uptime?

AgentME

a day ago

One 9 would be 90% (aka 0.9)

janalsncm

21 hours ago

9% would be 0.09 which is no nines.

vidarh

a day ago

Who is suggesting "AI is dead without (these specific) people"? People are wondering what it means specifically for the Qwen model family.

mhitza

a day ago

We've gone from AGI goals to short-term thinking via Ads. That puts things better in perspective, I think.

dude250711

a day ago

Claude is incapable of producing a native application for itself, and is bad enough with web ones to justify Anthropic acquiring Bun.

quantum_state

a day ago

I would second that Qwen3.5 is exceptionally good. In a calibration run, the 35B variant running locally on a 24GB Ada-generation GPU with easy-llm-cli did the same tasks as gemini-cli + Gemini 3 Pro, and they were at par ... really impressive, and it ran pretty fast ...

vardalab

a day ago

A q4 quant gives you 175 tok/s text generation and 7K tok/s prompt processing, which beats most cloud providers.

qwenverifier

21 hours ago

As a mathematician, I've lately experimented a lot with Qwen to produce professional summaries and relations between articles that are as good as possible, and in one case even to verify misattribution claims, which was used in an arXiv article.

All is collected in https://imar.ro/~mbuliga/ai-talks.html

w10-1

a day ago

It sounds like the lead was demoted to attract new talent, quit as a result, and the rest of the team also resigned to force management to change their minds.

If so, I'm happy that the team held together, and I hope that endogenous tech leads get to control their own career and tech destiny after hard work leads to great products. (It's almost as inspiring as tank man, and the tank commanders who tried to avoid harming him...)

(ducking the downvote for challenging the primacy of equity...)

MarsIronPI

3 hours ago

Wherever they end up next, I hope they can stick together as a team and that they keep insisting on publishing their models.

nurettin

a day ago

I am singularly impressed by 35B/A3, hope that is not the reason he had to leave.

zoba

a day ago

I tried the new qwen model in Codex CLI and in Roo Code and I found it to be pretty bad. For instance I told it I wanted a new vite app and it just started writing all the files from scratch (which didn’t work) rather than using the vite CLI tool.

Is there a better agentic coding harness people are using for these models? Based on my experience I can definitely believe the claims that these models are overfit to Evals and not broadly capable.

sosodev

a day ago

I've noticed that open weight models tend to hesitate to use tools or commands unless they appeared often in the training data, or you tell them very explicitly to do so in your AGENTS.md or prompt.

They also struggle at translating very broad requirements to a set of steps that I find acceptable. Planning helps a lot.

Regarding the harness, I have no idea how much they differ but I seem to have more luck with https://pi.dev than OpenCode. I think the minimalism of Pi meshes better with the limited capabilities of open models.

malwrar

a day ago

+1 to this, anecdotally I’ve found in my own evaluations that if your system prompt doesn’t explicitly declare how to invoke a tool and e.g. describe what each tool does, most models I’ve tried fail to call tools or will try to call them but not necessarily use the right format. With the right prompt meanwhile, even weak models shoot up in eval accuracy.
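To make this concrete, here's a minimal sketch of the kind of explicit tool declaration that helps: the tool name, schema, and calling convention are all spelled out in the system prompt rather than left implicit. The `read_file` tool and the exact response format here are made up for illustration; real harnesses vary, though most follow an OpenAI-style function-calling schema.

```python
import json

# Hypothetical tool declaration in OpenAI-style function-calling format.
tools = [{
    "type": "function",
    "function": {
        "name": "read_file",
        "description": "Read a file from the workspace and return its contents.",
        "parameters": {
            "type": "object",
            "properties": {
                "path": {
                    "type": "string",
                    "description": "Workspace-relative path to the file",
                },
            },
            "required": ["path"],
        },
    },
}]

# Spell out both what each tool does (the schema above) and exactly how
# to invoke one (the format below), instead of assuming the model knows.
system_prompt = (
    "You have access to the following tools. To call one, respond with a "
    'single JSON object of the form {"tool": <name>, "arguments": {...}} '
    "and nothing else.\n\n" + json.dumps(tools, indent=2)
)
```

With weaker models, pinning down the invocation format like this is often the difference between a well-formed tool call and a half-remembered one.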

mongrelion

2 hours ago

> [...] _but not necessarily use the right format._

This has also been my experience. But isn't the harness sending the instructions on how to invoke a tool? Maybe it is missing the formatting part. What do you think?

vardalab

a day ago

Have a frontier lab model do the plan, which is the most time-consuming part anyway, and then have a local LLM do the implementation. The frontier model can orchestrate your tickets, write a plan for them, and dispatch local LLM agents to implement at about 180 tokens/s; vLLM can probably manage something like 25 concurrent sessions on an RTX 6000. Do it all in worktrees and then have the frontier model do the review and merge.

I'm just a retired hobbyist, but that's my approach: I run everything through Gitea issues, and each issue gets launched by the orchestrator in a new tmux window, with the two main agents (implementer and reviewer) getting their own panes so I can see what's going on. I think Claude Code now has this aspect somewhat streamlined too, but I've seen no need to change my approach yet since I'm just tinkering on personal projects. Right now I use Claude Code subagents, but I've been thinking of replacing them with some of these Qwen 3.5 models, because they do seem capable and I have the hardware to run them.

Tepix

a day ago

What is "the new qwen model"? There are a dozen and you can get them in a dozen different quantizations (or more) which are of different quality each.

lreeves

a day ago

In my experience Qwen3.5/Qwen3-Coder-Next perform best in their own harness, Qwen-Code. You can also crib the system prompt and tool definitions from there, though. One caveat: despite the Qwen models being state of the art for local models, they are like a year behind anything you can pay for commercially, so asking one to build a new app from scratch might be a bit much.

ilaksh

a day ago

Does anyone know when the small Qwen 3.5 models are going to be on OpenRouter?

armanj

a day ago

ilaksh

a day ago

Like 4B, 2B, 9B. Supposedly they are surprisingly smart.

Sakthimm

a day ago

Yep. The 9B has excellent image recognition. I showed it a PCB photo and it correctly identified all components and the board type from part numbers and shape. OCR quality was solid. Tool calling with opencode worked without issues, but general coding ability is still far from sonnet-tier. Asked it to add a feature to an existing react app, it couldn't produce an error-free build and fell into a delete-redo loop. Even when I fixed the errors, the UI looked really bad. A more explicit prompt probably would have helped. Opus one-shotted it, same prompt, the component looked exactly as expected.

But I'll be running this locally for note summarization, code review, and OCR. Very coherent for its size.

BoredomIsFun

12 hours ago

> Very coherent for its size.

I found them to be less than stellar at writing coherent prose. Qwen 3.5 9b was worse in my tests than Gemma 3 4b.

raffael_de

a day ago

> me stepping down. bye my beloved qwen.

the qwen is dead, long live the qwen.

lacoolj

a day ago

I wonder if an american company poached one/all of them. They've been pretty much bleeding edge of open models and would not surprise me if Amazon or Google snatched them up

ferfumarma

a day ago

It would surprise me if they're willing to come to the US in the setting of the current DHS and ICE situation.

Were they kneecapped by Anthropic blocking their distillation attempts?

zozbot234

a day ago

What Anthropic was complaining about is training on mass-elicited chat logs. That is very much a ToS violation (you aren't allowed to exploit the service for the purpose of building a competitor), so the complaint is well-founded, but (1) it's not "distillation" properly understood, since you have no access to the actual weights; it can only feasibly extract the same kind of narrow knowledge you'd read out of chat logs, perhaps including primitive "let's think step by step" output (which is not true fine-tuned reasoning tokens); and (2) it's something Western AI firms are very much believed to do to one another and to Chinese models all the time anyway. Hence the brouhaha about Western models claiming to be DeepSeek when they answer in Chinese.

red2awn

a day ago

The "distillation attacks" are mostly using Claude as LLM-as-a-judge. They are not training on the reasoning chains in a SFT fashion.

zozbot234

a day ago

So they're paying expensive input tokens to extract at best a tiny amount of information ("judgment") per request? That's even less like "distillation" than the other claim of them trying to figure out reasoning by asking the model to think step by step.

red2awn

a day ago

LLM-as-a-judge is a quite effective method to RL a model, similar to RLHF but more objective and scalable. But yes, Anthropic is making it more serious than it is. Plus DeepSeek only did it for 125k requests, significantly less than the other labs, but Anthropic still listed them first to create FUD.

sowbug

4 hours ago

Unless you know something we don't, Alibaba hasn't been accused of distilling or stealing any Anthropic assets.

hwers

a day ago

My conspiracy-theory hat says that investors who also have a stake in OpenAI are sabotaging this, like they did when kicking Emad out of Stability AI.

storus

a day ago

More likely some high ranking party member's nepobaby from Gemini sniffed success with Qwen and the original folks just walked away as their reward disappeared.

ahmadyan

a day ago

source?

WarmWash

a day ago

There is no source. But the party in China does have ultimate control.

There would never be an Anthropic/Pentagon situation in China, because in China there isn't actually separation between the military and any given AI company. The party is fully in control.

liuliu

a day ago

Apples vs. oranges. The latter is true; Emad did get sabotaged (for not being able to raise money in time, about 8 months before he left). Junyang didn't have that long an arc of incidents.

jongjong

20 hours ago

Interesting reading this. It reminds me of my time in cryptocurrency sector. I suspected that some team members were paid by Ethereum folks to sabotage our project. Why do I suspect Ethereum? Because our project founders ended up switching to the Ethereum ecosystem and ignored/suppressed better solutions from their own ecosystem. I think there's something about tech hype which attracts these kinds of people who like to play dirty.

kartika848484

a day ago

what the hell, their models were promising tho

speedping

19 hours ago

open blogpost → ⌘ + F "pelican" → 0 results ???