FLUX.2: Frontier Visual Intelligence

359 points, posted 2 days ago
by meetpateltech

116 Comments

vunderba

2 days ago

Updating the GenAI comparison website is starting to feel a bit Sisyphean with all the new models coming out lately, but the results are in for the Flux 2 Pro Editing model!

https://genai-showdown.specr.net/image-editing

It scored slightly higher than BFL's Kontext model, coming in around the middle of the pack at 6 / 12 points.

I’ll also be introducing an additional numerical metric soon, so we can add more nuance to how we evaluate model quality as they continue to improve.

If you're solely interested in seeing how Flux 2 Pro stacks up against the Nano Banana Pro, and another Black Forest model (Kontext), see here:

https://genai-showdown.specr.net/image-editing?models=km,nbp...

Note: It should be called out that BFL seems to support a more formalized JSON structure for more granular edits, so I'm wondering whether accuracy would improve using it.
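
Something along these lines is what I'd try (the field names here are purely illustrative, not necessarily the exact schema BFL expects):

    {
      "task": "edit",
      "instruction": "replace the red umbrella with a green one",
      "preserve": ["background", "lighting", "subject's face"],
      "colors": {"umbrella": "#2E8B57"}
    }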

woolion

a day ago

The comparisons are very useful but also quite limited in terms of styles. Models vary enormously in how well they follow a given style versus steering toward their own.

It's pretty obvious that OpenAI is terrible at it -- it is known for its unmistakable touch. For Flux, though, it really depends on the style. They posted at some point that they changed their training to avoid averaging different styles together, which produces the ultimate "AI look". But that is at odds with the goal of directly generating images that are visually appealing, so style matching is going to be a problem for a while at least.

vunderba

18 hours ago

The site is broken up into an "Editing Comparison" section and a "Generative Comparison" section.

Generative: https://genai-showdown.specr.net

Editing: https://genai-showdown.specr.net/image-editing

Style is mostly irrelevant for editing, since the goal is to integrate seamlessly with the existing image. The focus is on performing relatively surgical edits or modifications to existing imagery while minimizing changes to the rest of the image. It is also primarily concerned with realism, though there are some illustrative examples (the JAWS poster, Great Wave off Kanagawa).

This contrasts with the generative section, though even there the emphasis is on prompt adherence, and style/fidelity take a back seat (which is honestly what 99% of existing generative benchmarks already focus on).

woolion

15 hours ago

Oh, thank you for your reply. We may have different definitions of style and of what editing means.

If you look, for example, at "Mermaid Disciplinary Committee", every single image is in a very different style, each of which you can consider the default the model assumes for that specific prompt. It's quite obvious that these styles were 'baked into' the models, and it's not clear how much you can steer them toward a specific style. If you look at "The Yarrctic Circle", a lot more models default to a kind of "generic concept art" style (the "by greg rutkowski" meme), but even then I would classify the results as at least 5 distinct styles. So for me this benchmark is not checking style at all, unless you consider style to be just four broad categories (cartoon, anime, realistic, painterly).

Regarding image editing, I did my own tests at the first release of the Flux tools and found it was almost impossible to get decent results in some specific styles, particularly cartoon and concept art. I think the tools focus on what imaginary marketing people would want (like "put this can of sugary beverage into an idyllic scene") rather than such use cases. So edits like "color this" would just be terrible, and certainly unusable.

woolion

15 hours ago

I didn't go very far with my own benchmarks because my results were just so bad. But for example, here's a piece of line art with the instruction to color it (I can't remember the prompt; I didn't take notes).

https://woolion.art/assets/img/ai/ai_editing.webp

It's the original, then ChatGPT, then Flux.

Still, you can see that ChatGPT just throws everything out and makes no attempt at respecting the style. Flux is quite bad, but it follows the design much more closely (although it gets completely confused by it), so it seems that with a whole lot of work you could get something out of it.

vunderba

15 hours ago

Yeah so NOVEL style transfer without the use of a trained LoRA is, to my knowledge, still a relatively unsolved problem. Even in SOTA models like Nano Banana Pro, if you attach several images with a distinct artistic style that is outside of its training data and use a prompt such as:

"Using the attached images as stylistic references, create an image of X"

It falls down pretty hard.

https://imgur.com/a/o3htsKn

woolion

13 hours ago

I'm pretty sure that some model at least advertised that it would work. I also think your example was in the training data at some point at least, but I suspect these styles get kind of pruned when the models are steered towards the "aesthetically pleasing" outputs that are often used as benchmarks. Thanks for the replies, it's quite informative.

vunderba

10 hours ago

Sure! That image was pretty zoomed out, so I've gone ahead and attached some of the reference images in greater detail:

https://imgur.com/a/failed-style-transfer-nb-pro-o3htsKn

Now you should be able to see that the generated image is stylistically not even close to the references (which are early works by Yoichi Kotabe). Pay careful attention to the characters.

With locally hostable models, you can try things like Reference/Shuffle ControlNets but that's not always successful either.
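
For anyone curious, a Shuffle ControlNet attempt with diffusers looks roughly like the sketch below (SD 1.5 for illustration; the checkpoints are the public lllyasviel ones, and results are very hit-or-miss):

    import torch
    from PIL import Image
    from controlnet_aux import ContentShuffleDetector
    from diffusers import ControlNetModel, StableDiffusionControlNetPipeline

    # The shuffle preprocessor scrambles the reference so mostly palette/texture survives
    ref = Image.open("style_reference.png")
    control_image = ContentShuffleDetector()(ref)

    controlnet = ControlNetModel.from_pretrained(
        "lllyasviel/control_v11e_sd15_shuffle", torch_dtype=torch.float16)
    pipe = StableDiffusionControlNetPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", controlnet=controlnet,
        torch_dtype=torch.float16).to("cuda")

    # The prompt carries the content; the shuffled reference nudges the style
    image = pipe("a cat playing piano, illustration",
                 image=control_image, num_inference_steps=30).images[0]
    image.save("styled.png")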

spaceman_2020

a day ago

Clearly Google is winning this by some margin

Seedream is also very good and makes me think the next version will challenge Google for SOTA image gen

Increasingly feels like image gen is a solved problem

raxxorraxor

18 hours ago

I think the margin isn't that large, to be honest. Considering the resources and data Google has available, it's quite small and perhaps should be larger.

Also, it doesn't feel solved to me at all. There is no general model, and perhaps one cannot reasonably exist. I think these benchmark tests are smart, but they don't show the whole picture.

Domain-specific image generation tasks still require domain-specific models. For art purposes, SD1.5 with specialized, finely tuned checkpoints will still provide the best results by far. It is limited too, but I think it dampened the hype for new image generators significantly.

spunker540

16 hours ago

Does SD1.5 suffer from resolution / coherence / complexity issues?

I understand checkpoints can be fine-tuned for most domains, but I still felt SD1.5 had a resolution ceiling and a complexity ceiling no matter how good the fine tune.

raxxorraxor

2 hours ago

Yes, the toolchains around it can alleviate that, but only to a degree. You're more or less dependent on a fine tune specifically trained for the things you want. But if you have that, the image quality is usually far better than from any generic model in my opinion, aside from resolution.

Merging several concepts is mostly beyond it, but I haven't seen any model that's good at that yet. There are some that are significantly better, but they often come with other disadvantages.

Overall, what these models can do is quite impressive. But if you want a really high-quality image, finding the right model is as difficult as finding the right prompt. And the general models tend to fall back to some mean, standard AI image.

vunderba

14 hours ago

Yeah, SD 1.5 is mostly trained on datasets at 512x512 resolution. That's why you'd get crazy multi-limbed Goro abominations if you pushed checkpoints much higher than 768x768 without using either a Hires Fix or Img2Img.

There's not much of a reason to use SD 1.5 over SDXL if image quality is paramount.

A lot of people (myself included) use a pipeline that involves using Flux to get the basic action / image correct, then SDXL as a refiner and finally a decent NMKD-based upscaler.
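
The Hires Fix / Img2Img trick is basically a two-stage pass; a minimal sketch with diffusers (SD 1.5 for simplicity, and the 0.35 strength is just a starting point):

    import torch
    from PIL import Image
    from diffusers import StableDiffusionPipeline, StableDiffusionImg2ImgPipeline

    base = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16).to("cuda")

    prompt = "portrait of an astronaut, detailed, soft light"
    # Stage 1: stay near the native 512px training resolution
    low = base(prompt, height=512, width=512, num_inference_steps=30).images[0]

    # Stage 2: naive upscale, then a low-strength img2img pass to re-add detail
    refiner = StableDiffusionImg2ImgPipeline(**base.components)
    upscaled = low.resize((1024, 1024), Image.LANCZOS)
    final = refiner(prompt, image=upscaled, strength=0.35,
                    num_inference_steps=30).images[0]
    final.save("refined.png")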

ttul

20 hours ago

Prompt understanding will only ever be as good as the language embeddings that are fed into the model’s input. Google’s hardware can host massive models that will never be run on your desktop GPU. By contrast, Flux and its kin have to make do with relatively tiny LLMs (Qwen Image uses a 7B-param LLM).

bn-l

16 hours ago

Hey, I hope you see this. The scoring needs to be 0-10 or something with a range rather than pass or fail. Flux getting the same score on the surfer test as Gemini 3 Pro reduces the quality of the benchmark.

vunderba

10 hours ago

Hi bn-l, yeah as mentioned above and in the Release Notes - we'll be adding a more nuanced numerical score in the next week.

I don't know if I'm going to get as granular as 1-10, only because the finer the scoring, the more potential for subjectivity. That's why it was initially set up as a "Minimum Passing Criteria Rule Set" along with a Pass/Fail grade.

A suggestion from a previous HN post was something along the lines of (0 Fail, 0.5 Technical Pass, 1.0 Proficient Pass).

sroussey

18 hours ago

On the site: s/sttae/state/g

echelon

a day ago

How much energy does BFL have to keep playing this game against Google and ByteDance (SeeDream)?

If their new fancy model is only middle of the pack, and they're not as open source as the Chinese Qwen image models (or ByteDance / Alibaba / Lightricks video models), what's the point?

It's not just prompt adherence; the image quality of Flux models has been pretty bad. Plastic skin, inhumanly chiseled chins, that general faux "AI" aura.

Indeed, the Flux samples in your test suite that "pass" look God-awful. It might "pass" from a technical standpoint, but there's no way I'd choose Flux to solve my workflows. It looks bad.

(I wonder if they lack people on their data team with good aesthetic taste. It may be as simple as that.)

I think this company is struggling. They're pinned between Google and the Chinese. It's a tough, unenviable spot to be in.

I think a lot of the foundation model companies in media are having a really hard time: RunwayML, PikaLabs, LumaLabs. Some of them have pivoted hard away from solving media for everyone. I don't think they can beat the deep-pocketed hyperscalers or the Chinese ecosystem.

BFL just raised a massive round, so what do I know? I just can't help but feel that even though Runway raised similar money, they're struggling really hard now. And I would really not want to be fighting against Google who is already ahead in the game.

latentspacer

a day ago

i may be wrong, but it doesn't seem like BFL is struggling to me. they were apparently founded in august 2024, and have already signed $100M+ revenue deals with customers like meta (https://www.bloomberg.com/news/articles/2025-09-09/meta-to-p...)

in fact, it seems like BFL has benefited a lot by becoming the go-to alternative for big enterprise customers who don't want to be dependent on google

Bombthecat

17 hours ago

The contract is still going / will be going on in 2026?

echelon

a day ago

Wow, I didn't hear about this. That's impressive, and kudos to the team.

That's why they raised the massive round, then.

But this just leads to more questions - I have to wonder whether, and for how long, this is just going to plug a gap in Meta's own AI product offering. At some point they'll want to build their own in-house models or perhaps just acquire BFL. Zuckerberg would not be printing AI data centers if that wasn't the case.

From a PLG standpoint, Flux isn't really what graphic designers are choosing for their work. The generations look worse than OpenAI's "piss filter". But aesthetics might not be the play the team is going after.

Hopefully they don't just raise all of this dry powder and burn it trying to race Google. They should start listening to designers and get in their good graces if their intent is to build tools for art and graphic design work.

A good press release would consist of lots of good-looking images and a video of workflows that save artists time. This press release doesn't connect with graphic designers at all and reads as if they aren't even the audience.

If it's something else, more "enterprise", that BFL is after, then maybe I don't know the strategy or game plan.

latentspacer

a day ago

idk it seems pretty clear BFL’s target market is developers not graphic designers. and for developers at scale like Meta and Adobe, it’s pretty incredible a tiny startup like BFL has become the primary alternative to Google with 1/100th of the resources within 12 months of their founding, doing hundreds of millions of revenue

the Chinese models are great, but no serious enterprise developer is going to bet their image workloads at scale in production on Chinese models if the market evolves anything like past developer infrastructure

throwaway314155

17 hours ago

How is an image generation model serving the market of...developers? I mean I know we all focus on these models and get excited about what they can do. But why would we pay for them for more than a few tests?

rhdunn

a day ago

Reading the post, the architectural change is combining a vision model (Mistral 3 in the FLUX.2 case) with a rectified flow transformer.

I wonder if this architectural change makes it easier to use other vision models such as the ones in Llama 3 and 4, or possibly a future Llama 5.
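
From what I understand, the rectified flow part just trains the transformer to predict a constant velocity along a straight line between noise and image latents, conditioned on the encoder's embeddings -- conceptually something like this (the model call is a stand-in, not BFL's actual code):

    import torch
    import torch.nn.functional as F

    def rectified_flow_loss(model, x0, cond):
        # x0: clean image latents (B, C, H, W); cond: text/vision embeddings
        noise = torch.randn_like(x0)                   # x1 ~ N(0, I)
        t = torch.rand(x0.shape[0], device=x0.device)  # one timestep per sample
        t_ = t.view(-1, 1, 1, 1)

        xt = (1.0 - t_) * x0 + t_ * noise              # straight-line interpolation
        target_v = noise - x0                          # constant velocity along that line

        pred_v = model(xt, t, cond)                    # transformer predicts the velocity
        return F.mse_loss(pred_v, target_v)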

vunderba

a day ago

Sadly, I tend to agree. I'm rooting for BFL, but the results from this latest model (the Pro version, of all things) have just been a bit disappointing. Google’s release of NB Pro last week certainly didn’t help either, since it set the bar so incredibly high.

Flux 2 Pro only scored a single point higher than the Kontext models they released over half a year ago.

The text-to-image side was even more frustrating. It often felt like it was actively fighting me, as evidenced by the high number of re-rolls required before it passed some of the tests (Cubed⁵, for example).

spyder

2 days ago

Great, especially that they still have an open-weight variant of this new model too. But what happened to their work on their unreleased SOTA video model? Did it stop being SOTA, did others get ahead and they folded the project, or what? YT video about it: https://youtu.be/svIHNnM1Pa0?t=208 They even removed its page: https://bfl.ai/up-next/

liuliu

2 days ago

As a startup, they pivoted and focused on image models (they are model providers, and image models often have more use cases than video models, not to mention they continue to have a bigger data moat in images than in video).

echelon

a day ago

> bigger image dataset moat

If they have so much data, then why do Flux model outputs look so God-awful bad?

The outputs have plastic skin, weird chins, and that "AI" aura. Not the good AI aura, mind you. The cheap automated YouTube video kind that you immediately skip.

Flux 2 seems to suffer from the exact same problems.

Midjourney is ancient. Their CEO is off trying to build a 3D volume and dating companion or some nonsense, leaving the product without guidance or much change. It almost feels abandoned. But even so, Midjourney has 10,000x better aesthetics despite having terrible prompt adherence and control. Midjourney images are dripping with magazine-spread or Pulitzer aesthetics. It's why Zuckerberg went to them to license their model instead of quasi "open source" BFL.

Even SDXL looks better, and that's a literal dinosaur.

Most of the amazing things you see on social media either come from Midjourney or SDXL. To this day.

SV_BubbleTime

a day ago

>Even SDXL looks better, and that's a literal dinosaur.

I’m not saying you are wrong in effect, but for reference, SDXL was released just slightly over 2 years ago, and it took about a year to get great fine tunes.

andersa

2 days ago

I heard a possibly unsubstantiated rumor that they had a major failed training run with the video model and canceled the project.

qoez

2 days ago

Makes no sense, since they should have earlier checkpoints in the run that they could restart from, and they should have regular checks that track whether a model has exploded, etc.

embedding-shape

2 days ago

I didn't read "major failed training run" as in "the process crashed and we lost all data" but more like "After spending N weeks on training, we still didn't achieve our target(s)", which could be considered "failing" as well.

echelon

a day ago

They could have done what Lightricks did with LTX-1 - build almost embarrassingly small models in the open and iteratively improve from learning.

LTX's first model felt two years behind SOTA when it launched, but they viewed it as a success and kept going.

The investment initially is low and can scale with confidence.

BFL goes radio silent and then drops stuff. Now they're dropping stuff that is clearly middle of the pack.

Going from launching SOTA models to launching "embarrassingly small models" isn't something investors are generally into, especially when you're thinking about which training runs to launch and their parameters. And since BFL has investors, they have to make choices that try to maximize ROI for investors rather than the community at large, so this is hardly surprising.

There's always a possibility that something implicit to the early model structure causes it to explode later, even if it's a well known, otherwise stable architecture, and you do everything right. A cosmic bit flip at the start of a training run can cascade into subtle instability and eventual total failure, and part of the hard decision making they have to do includes knowing when to start over.

I'd take it with a grain of salt; these people are chainsaw jugglers and know what they're doing, so any sort of major hiccup was probably planned for. They'd have plan b and c, at a minimum, and be ready to switch - the work isn't deterministic, so you have to be ready for failures. (If you sense an imminent failure, don't grab the spinny part of the chainsaw, let it fall and move on.)

latentspacer

a day ago

lol, unless I’m wrong, that is not how model development works

a ‘major training run’ only becomes major after you sample from it iteratively every few thousand steps, check it’s good, fix your pipeline, then continue

almost by design, major training runs don’t fail

if I had to guess, like most labs, they’ve probably had to reallocate more time and energy to their image models than expected since the AI image editing market has exploded in size this year, and will do video later

rhdunn

a day ago

It could be that they weren't able to produce stable video -- i.e. getting a consistent look across frames. Video is more complex than image because of this. If their architecture couldn't handle that properly then no amount of training would fix it.

If they found that their architecture worked better on static images, then it is better to pivot to that than to waste the effort. Especially if you have a trained model that is good at producing static images and bad at generating video.

echelon

2 days ago

Image models are more fundamentally important at this stage than video models.

Almost all of the control in image-to-video comes through an image. And image models still need a lot of work and innovation.

On a real physical movie set, think about all of the work that goes into setting the stage. The set dec, the makeup, the lighting, the framing, the blocking. All the work before calling "action". That's what image models do and must do in the starting frame.

We can get way more influence out of manipulating images than video. There are lots of great video models and it's highly competitive. We still have so much need on the image side.

When you do image-to-video, yes, you control evolution over time. But that direction actually has fewer degrees of freedom. You expect your actors or explosions to do certain reasonable things. But those 1024x1024xRGB pixels (or higher) have way more degrees of freedom.

Image models have more control surface area. You exercise control over more parameters. In video, staying on rails or certain evolutionary paths is fine. Mistakes can not just be okay, they can be welcome.

It also makes sense that most of the work and iteration goes into generating images. It's a faster workflow with more immediate feedback and productivity. Video is expensive and takes much longer. Images are where the designer or director can influence more of the outcomes with rapidity.

Image models still need way more stylistic control, pose control (not just ControlNets for limbs, but facial expressions, eyebrows, hair - everything), sets, props, consistent characters and locations and outfits. Text layout, fonts, kerning, logos, design elements, ...

We still don't have models that look as good as Midjourney. Midjourney is 100x more beautiful than anything else - it's like a magazine photoshoot or dreamy Instagram feed. But it has the most lackluster and awful control of any model. It's a 2021-era model with 2030-level aesthetics. You can't place anything where you want it, you can't reuse elements, you can't have consistent sets... But it looks amazing. Flux looks like plastic, Imagen looks cartoony, and OpenAI GPT Image looks sepia and stuck in the 90's. These models need to compete on aesthetics and control and reproducibility.

That's a lot of work. Video is a distraction from this work.

cubefox

2 days ago

Hot take: text-to-image models should be biased toward photorealism. This is because if I type in "a cat playing piano", I want to see something that looks like a 100% real cat playing a 100% real piano. Because, unless specified otherwise, a "cat" is trivially something that looks like an actual cat. And a real cat looks photorealistic. Not like a painting, or a cartoon, or a 3D render, or some fake almost-realistic-but-clearly-wrong "AI style".

85392_school

2 days ago

FYI: photorealism is art that imitates photos, and I see the term misused a lot both in comments and in prompts (where you'll actually get suboptimal results if you say "photorealism" instead of describing the camera that "shot" it!).

cubefox

2 days ago

I meant it here in the sense of "as indistinguishable from a photo as the model can make it".

echelon

a day ago

"style" is apt for many reasons.

I've heard chairs of animation departments say they feel like this puts film departments under them as a subset rather than the other way around. It's a funny twist of fate, given that the tables turned on them ages ago.

Photorealistic models are just learning the rules of camera optics and physics. In other "styles", the models learn how to draw Pixar shaded volumes, thick lines, or whatever rules and patterns and aesthetics you teach.

Different styles can reinforce one another across stylistic boundaries and mixed data sets can make the generalization better (at the cost of excelling in one domain).

"Real life", it seems, might just be a filter amongst many equally valid interpretations.

minimaxir

2 days ago

As Midjourney has demonstrated, the median user of AI image generation wants those aesthetic dreamy images.

cubefox

2 days ago

I think it's more likely this is just a niche that Midjourney has occupied.

loudmax

2 days ago

If Midjourney is a niche, then what is the broader market for AI image generation?

Porn, obviously, though if you look at what's popular on civitai.com, a lot of it isn't photo-realistic. That might change as photo-realistic models are fully out of the uncanny valley.

Presumably personalized advertising, but this isn't something we've seen much of yet. Maybe this is about to explode into the mainstream.

Perhaps stock-photo type images for generic non-personalized advertising? This seems like a market with a lot of reach, but not much depth.

There might be demand for photos of family vacations that didn't actually happen, or removing erstwhile in-laws from family photos after a divorce. That all seems a bit creepy.

I could see some useful applications in education, like "Draw a picture to help me understand the role of RNA." But those don't need to be photo-realistic.

I'm sure people will come up with more and better uses for AI-generated images, but it's not obvious to me there will be more demand for images that are photo-realistic, rather than images that look like illustrations.

dragonwriter

11 hours ago

> Porn, obviously, though if you look at what's popular on civitai.com, a lot of it isn't photo-realistic.

I don't have an argument to make on the main point, but Civitai has a whole lot of structural biases built into it (both intentionally and as side effects of policies that probably aren't intended to influence popularity in the way they do), so I would hesitate to use "what is popular on Civitai" as a guide to "what is attractive to (or commercially viable in) the market", either for AI imagery in general or for AI imagery in the NSFW domain specifically.

kevin_thibedeau

15 hours ago

> what is the broader market for AI image generation?

Replace commercial stock imagery. My local Home Depot has a banner by one of the cash registers with an AI house replete with mismatched trim and weird structural design but it's passable at a glance.

echelon

2 days ago

> If Midjourney is a niche, then what is the broader market for AI image generation?

Midjourney is one aesthetically pleasing data point in a wide spectrum of possibilities and market solutions.

The creator economy is huge and is outgrowing Hollywood and the music industry combined.

There's all sorts of use cases in marketing, corporate, internal comms.

There are weird new markets. A lot of people simply subscribe to Midjourney for "art therapy" (a legit term) and use it as a social media replacement.

The giants are testing whether an infinite scroll of 100% AI content can beat human social media. Jury's out, but it might start to chip away at Instagram and TikTok.

Corporate wants certain things. Disney wants to fine tune. They're hiring companies like MoonValley to deliver tailored solutions.

Adobe is building tools for agencies and designers. They are only starting to deliver competent models (see their conference videos), and they're going about this a very different way.

ChatGPT gets the social trend. Ghibli. Sora memes.

> Porn, obviously, though if you look at what's popular on civitai.com, a lot of it isn't photo-realistic.

Civitai is circling the drain. Even before the unethical and religiously motivated Visa blacklisting, the company was unable to steer itself to a Series A. Stable Diffusion and local models are still way too hard for 99.99% of people and will never see the same growth as a Midjourney or OpenAI, which have zero sharp edges and which anyone in the world can use. I'm fairly certain an "OnlyFans but AI" will arise and make billions of dollars. But it has to be so easy that a trucker who never learned to code can use it from their 11-year-old Toshiba.

> Presumably personalized advertising, but this isn't something we've seen much of yet.

Carvana pioneered this almost five years ago. I'll try to find the link. This isn't going to really take off though. It's creepy and people hate ads. Carvana's use case was clever and endearing though.

cubefox

2 days ago

Well, as I said, if I type "cat", the most reasonable interpretation of that text string is a perfectly realistic cat.

If I want an "illustration" I can type in "illustration of a cat". Though of course that's still quite unspecific. There are countless possible unrealistic styles for pictures (e.g. line art, manga, oil painting, vector art, etc.), and the reasonable thing is that users should specify which of these countless unrealistic styles they want, if they want one. If I just type in "cat" and the model gives me, say, a watercolor picture of a cat, it is highly improbable that this style happens to be what I actually wanted.

If I want a badly drawn, salad fingers inspired scrawl of a mangy cat, it should be possible. If I want a crisp, xkcd depiction of a cat, it should capture the vibe, which might be different from a stick fighters depiction of a cat, or "what would it look like if George Washington, using microsoft paint for the first time, right after stepping out of the time machine, tried to draw a cat"

I think we'll probably need a few more hardware generations before it becomes feasible to use chatgpt 5 level models with integrated image generation. The underlying language model and its capabilities, the RL regime, and compute haven't caught up to the chat models yet, although nano-banana is certainly doing something right.

minimaxir

2 days ago

I just finished my Flux 2 testing (focusing on the Pro variant here: https://replicate.com/black-forest-labs/flux-2-pro). Overall, it's a tough sell to use Flux 2 over Nano Banana for the same use cases, but even if Nano Banana didn't exist it's only an iterative improvement over Flux 1.1 Pro.

Some notes:

- Running my nuanced Nano Banana prompts through Flux 2, Flux 2 definitely has better prompt adherence than Flux 1.1, but in all cases the image quality was worse / more obviously AI-generated.

- The prompting guide for Flux 2 (https://docs.bfl.ai/guides/prompting_guide_flux2) encourages JSON prompting by default, which is new for an image generation model that has the text encoder to support it. It also encourages hex color prompting, which I've verified works.

- Prompt upsampling is an option, but it's one that's pushed in the documentation (https://github.com/black-forest-labs/flux2/blob/main/docs/fl...). This does allow the model to deductively reason, e.g. if asked to generate an image of a Fibonacci implementation in Python it will fail hilariously if prompt upsampling is disabled, but get somewhere if it's enabled: https://x.com/minimaxir/status/1993361220595044793

- The Flux 2 API will flag anything tangentially related to IP as sensitive even at its lowest sensitivity level, which is different from the Flux 1.1 API. If you enable prompt upsampling, it won't get flagged, but the results are...unexpected. https://x.com/minimaxir/status/1993365968605864010

- Costwise and generation-speed-wise, Flux 2 Pro is on par with Nano Banana, and adding an image as an input pushes the cost of Flux 2 Pro higher than Nano Banana. The cost discrepancy increases if you try to utilize the advertised multi-image reference feature.

- Testing Flux 1.1 vs. Flux 2 generations does not result in objective winners, particularly around more abstract generations.

loudmax

2 days ago

The fact that you have the possibility of running Flux locally might be enough of an argument to sway the balance for some cases. For example, if you've already set up a workflow and Google jacks up the price, or changes the API, you have no choice but to go along. If BFL does the same, you at least have the option of running locally.

minimaxir

2 days ago

Those cases imply commercial workflows that are prohibited with the open-weights model without purchasing a license.

I am curious to see how the Apache 2.0 distilled variant performs but it's still unlikely that the economics will favor it unless you have a specific niche use case: the engineering effort needed to scale up image inference for these large models isn't zero cost.

BoorishBears

a day ago

Their testing was for the Pro model, which you cannot host locally, and is already not price competitive with Google's offering for the capabilities.

echelon

a day ago

You can run Alibaba's Qwen(Edit) locally too, and the company isn't as weird with its license, weights, or training set.

I personally prefer Qwen's performance here. I'm waiting to see other folks' takes.

The Qwen folks are also a lot more transparent, spend time community building, and iterate on releases much more rapidly. In the open rather than behind closed doors.

I don't like how secretive BFL is.

vunderba

2 days ago

I've re-run my benchmark with the Flux 2 Pro model and found that in some cases the higher-resolution models (I believe Flux 2 Pro handles 4K) can actually backfire on some of the tests, because they'll introduce the equivalent of an almost ESRGAN-style upscale, which may add unwanted additional details. (See the Constanza test in particular.)

https://genai-showdown.specr.net/image-editing

minimaxir

2 days ago

That Constanza test result is baffling.

vunderba

2 days ago

Agreed - I was quite surprised. Even though it's a bog-standard 1024x1024 image, the somewhat low-quality nature of a TV still makes for an interesting challenge. All the BFL models (Kontext Max and Flux 2 Pro) seemed to struggle hard with it.

babaganoosh89

2 days ago

Flux 2 Dev is not IP censored

minimaxir

2 days ago

Do you have generations contradicting that? The HF repo for the open-weights Flux 2 Dev says that IP filters are in place (and implies it's a violation of the license to do so).

EDIT: Seeing a few generations on /r/StableDiffusion generating IP from the open weights model.

542458

2 days ago

> Run FLUX.2 [dev] on GeForce RTX GPUs for local experimentation with an optimized fp8 reference implementation of FLUX.2 [dev], created in collaboration with NVIDIA and ComfyUI.

Glad to see that they're sticking with open weights.

That said, Flux 1.x was 12B params, right? So this is about 3x as large, plus a 24B text encoder (unless I'm misunderstanding), so it might be a significant challenge for local use. I'll be looking forward to the distilled version.

minimaxir

2 days ago

Looking at the file sizes on the open-weights version (https://huggingface.co/black-forest-labs/FLUX.2-dev/tree/mai...), the 24B text encoder is 48GB and the generation model itself is 64GB, which roughly tracks with the 32B parameters mentioned.

Downloading over 100GB of model weights is a tough sell for the local-only hobbyists.
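
The sizes line up with simple parameter-count arithmetic:

    def weight_gb(params_billion, bits):
        return params_billion * bits / 8  # GB of raw weights

    print(weight_gb(32, 16))  # 64.0 -> the bf16 generation model
    print(weight_gb(24, 16))  # 48.0 -> the Mistral text encoder
    print(weight_gb(32, 8))   # 32.0 -> an fp8 transformer
    print(weight_gb(32, 4))   # 16.0 -> a 4-bit transformer, before quantization overhead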

zamadatix

2 days ago

100 GB is less than a game download; it's actually running it that's the tough sell. That said, the linked blog post seems to say the optimized model is both smaller and greatly improves the streaming approach from system RAM, so maybe it is actually reasonably usable on a single 4090/5090-type setup (I'm not at home to test).

BadBadJellyBean

2 days ago

Never mind the download size. Who has the VRAM to run it?

pixelpoet

2 days ago

I do, 2x Strix Halo machines ready to go.

zamadatix

a day ago

(Fellow Strix Halo owner): I don't really like calling it VRAM any more than when a dGPU dynamically maps a portion of system RAM. It's really just a system with quad-channel RAM speeds attached to a GPU without VRAM - roughly 2x the performance of using the system RAM on my 2-channel desktop, versus actual VRAM on the dGPU in that system (which is something like 20x).

That's great, and I love the little laptop for the amount of x86 perf it can pack into so little cooling, but my used Epyc box of ~the same price is usually faster for AI (despite the complete lack of video card) and able to load models 3x the size (well, before RAM prices doubled this last month) because it has modular 12 channel RAM and memory speeds this low don't really need a GPU to keep up with the matrix math. Meanwhile, Flux is already slow when it's on actual real high bandwidth dedicated GPU memory VRAM.

_ache_

2 days ago

Even a 5090 can't handle that. You have to use multiple GPUs.

So the only option will be [klein] on a single GPU... maybe? Since we don't have much information.

dragonwriter

11 hours ago

> Even a 5090 can't handle that. You have to use multiple GPUs.

It takes about 40GB with the fp8 version fully loaded, but with enough system RAM available, ComfyUI can (at reduced speed) partially load models into VRAM during inference and swap as needed, so it can run on systems with too little VRAM to fully load the model. (The NVIDIA page linked in the BFL announcement specifically highlights NVIDIA working with ComfyUI to improve this existing capability specifically to enable FLUX.2.)

Sharlin

2 days ago

As far as I know, no open-weights image gen tech supports multi-GPU workflows except in the trivial sense that you can generate two images in parallel. The model either fits into the VRAM of a single card or it doesn’t. A 5ish-bit quantization of a 32Gw model would be usable by owners of 24GB cards, and very likely someone will create one.

crest

2 days ago

The download is a trivial onetime cost and so is storing it on a direct attached NVMe SSD. The expensive part is getting a GPU with 64GB of memory.

minimaxir

2 days ago

Text encoder is Mistral-Small-3.2-24B-Instruct-2506 (which is multimodal) as opposed to the weird choice to use CLIP and T5 in the original FLUX, so that's a good start albeit kinda big for a model intended to be open weight. BFL likely should have held off the release until their Apache 2.0 distilled model was released in order to better differentiate from Nano Banana/Nano Banana Pro.

The pricing structure on the Pro variant is...weird:

> Input: We charge $0.015 for each megapixel on the input (i.e. reference images for editing)

> Output: The first megapixel is charged $0.03 and then each subsequent MP will be charged $0.015
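
Worked out for a typical edit (assuming those per-MP rates apply pro rata), it looks like this:

    def flux2_pro_cost(input_mp, output_mp):
        # $0.015 per input MP; output: $0.03 for the first MP, $0.015 per MP after that
        return 0.015 * input_mp + 0.03 + 0.015 * max(output_mp - 1, 0)

    print(flux2_pro_cost(1, 1))  # one 1MP reference, 1MP output -> $0.045
    print(flux2_pro_cost(4, 1))  # four 1MP references           -> $0.09
    print(flux2_pro_cost(0, 4))  # text-to-image at 4MP          -> $0.075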

woadwarrior01

2 days ago

> BFL likely should have held off the release until their Apache 2.0 distilled model was released in order to better differentiate from Nano Banana/Nano Banana Pro.

Qwen-Image-Edit-2511 is going to be released next week. And it will be Apache 2.0 licensed. I suspect that was one of the factors in the decision to release FLUX.2 this week.

kouteiheika

2 days ago

> as opposed to the weird choice to use CLIP and T5 in the original FLUX

Yeah, CLIP here was essentially useless. You can even completely zero the weights through which the CLIP input is ingested by the model and it barely changes anything.

throwaway314155

2 days ago

> as opposed to the weird choice to use CLIP and T5 in the original FLUX

This method was used in tons of image generation models. Not saying it's superior or even a good idea, but it definitely wasn't "weird".

dragonwriter

10 hours ago

Considering how little (and sometimes negative) benefit it provided in most of them compared to just using the biggest encoder model and leaving a null prompt on the rest (not just in those using the specific combination Flux.1 did, but in most of the multi-encoder models), it's actually pretty weird that people kept doing it.

beernet

2 days ago

Nice catch. Looks like engineers tried to take care of the GTM part as well and (surprise!) messed it up. In any case, the biggest loser here is Europe once again.

xnx

2 days ago

Good to see there's some competition to Nano Banana Pro. Other players are important for keeping the price of the leaders in check.

ygouzerh

a day ago

It's nice as well for locations that are blocked from using private US models. Here in Hong Kong, for example, Google doesn't allow us to subscribe to Gemini Pro. (Same for OpenAI and Claude, actually.)

mlnj

2 days ago

Also happy to see European players doing it.

Yokohiii

2 days ago

18gb 4 bit quant via diffusers. "low vram setup" :)
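
roughly this pattern, for anyone who hasn't tried it (shown with the flux.1 classes since i haven't checked what the flux.2 pipeline ends up being called in diffusers -- treat the names as placeholders):

    import torch
    from diffusers import BitsAndBytesConfig, FluxPipeline, FluxTransformer2DModel

    # NF4 4-bit quantization of the transformer only
    quant = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_quant_type="nf4",
                               bnb_4bit_compute_dtype=torch.bfloat16)

    transformer = FluxTransformer2DModel.from_pretrained(
        "black-forest-labs/FLUX.1-dev", subfolder="transformer",
        quantization_config=quant, torch_dtype=torch.bfloat16)

    pipe = FluxPipeline.from_pretrained(
        "black-forest-labs/FLUX.1-dev", transformer=transformer,
        torch_dtype=torch.bfloat16)
    pipe.enable_model_cpu_offload()  # park idle components in system RAM

    image = pipe("a lighthouse at dusk, film photo",
                 num_inference_steps=28).images[0]
    image.save("out.png")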

AmazingTurtle

2 days ago

I ran "family guy themed cyberpunk 2077 ingame screenshot, peter griffin as main character, third person view, view of character from the back" on both Nano Banana Pro and BFL Flux 2 Pro. The difference was staggering. The Google model aligned better with the Cyberpunk in-game scene; Flux was too "realistic".

Yokohiii

2 days ago

i think they focus their dataset on photography. flux 1 dev was never really great at artistic style, mostly locking you into a somewhat generic style. my little bit of flux 2 pro testing does seem to confirm that. but with the lora ecosystem and enough time to fiddle, flux 1 dev is probably still the best if you want creative stylistic results.

jonplackett

11 hours ago

Been trying this and found it to be fantastic. Much more naturalistic images than Gemini or ChatGPT, and a great level of understanding.

visioninmyblood

2 days ago

The model looks good for an open-source model. I want to see how these models are trained. Maybe they have a base model from academic datasets and quickly fine-tune on outputs from models like Nano Banana Pro or something? That could be the game for such models. But it's great to see an open-source model competing with the big players.

anjneymidha

2 days ago

they released a research post on how the new model's VAE was trained here: https://bfl.ai/research/representation-comparison

visioninmyblood

2 days ago

Great, this is more on the technical details. It would still be nice to see the data, though. I know they won't expose such information, but it would be great to have visibility into the datasets and how the data was sourced.

notrealyme123

2 days ago

> The FLUX.2 - VAE is available on HF under an Apache 2.0 license.

Has anyone found this? For me the link doesn't lead to the model.

geooff_

2 days ago

Their published benchmarks leave a lot to be desired. I would be interested in seeing their multi-image performance vs. Nano Banana. I just finished benchmarking image editing models, and while Nano Banana is the clear winner for one-shot editing, it's not great at few-shot.

minimaxir

2 days ago

The issue with testing multi-image with Flux is that it's expensive due to its pricing scheme ($0.015 per input image for Flux 2 Pro, $0.06 per input image for Flux 2 Flex: https://bfl.ai/pricing?category=flux.2), while the cost of adding additional images is negligible with Nano Banana ($0.000387 per image).

In the case of Flux 2 Pro, adding just one image increases the total cost to be greater than a Nano Banana generation.

hermitcrab

a day ago

I spent 2 minutes on the website and I still don't know what it is. Generative AI? An image editor?

yapyap

a day ago

definitely gen. ai

bossyTeacher

2 days ago

Genuine question: does anyone use any of these text-to-image models regularly for non-trivial tasks? I am curious to know how they get used. It literally seems like there is a new model reaching the top 3 every week.

I use them to generate very niche porn

a96

a day ago

(I'm not really familiar with image generators.) Would you care to share how well that works? Given the heavy censorship attitudes, I wouldn't expect that to be easy.

DeathArrow

2 days ago

We probably won't be able to run it on regular PCs, even with a 5090. So I am curious how good the results will be using a quantized version.

shikon7

a day ago

You can run it with a 5090 and the standard ComfyUI template, it just offloads some parts to RAM. Image generation takes about a minute for sizes like 1024x1024.

echelon

2 days ago

> Launch Partners

Wow, the Krea relationship soured? These are both a16z companies and they've worked on private model development before. Krea.1 was supposed to be something to compete with Midjourney aesthetics and get away from the plastic-y Flux models with artificial skin tones, weird chins, etc.

This list of partners includes all of Krea's competitors: HiggsField (current aggregator leader), Freepik, "Open"Art, ElevenLabs (which now has an aggregator product), Leonardo.ai, Lightricks, etc. but Krea is absent. Really strange omission.

I wonder what happened.

dvrp

a day ago

They messed up. We (Krea) were also surprised.

They put our logo after we pointed it out.

Nice eye!

eric-p7

2 days ago

Yes yes very impressive.

But can it still turn my screen orange?

DeathArrow

2 days ago

If this is still a diffusion model, I wonder how well it compares with Nano Banana.

liuliu

a day ago

There is no reason to believe Gemini Image is not a diffusion model. In fact, the generated results suggest it at least has a VAE and is very likely a diffusion model variant (most likely a Transfusion-style model).

beernet

2 days ago

Oh, looks like someone had to release something very quickly after Google came for their lunch. BFL's little 15 minutes already seems to be over.

whywhywhywhy

2 days ago

comparing a closed image model to an open one is like comparing a compiled closed source app to raw source code.

it's pointless to compare in pure output when one is set in stone and the other can be built upon.

beernet

2 days ago

Did you guys even check the licence? Not sure what is "open source" about that. Open weights at the very best, yet highly restrictive

gunalx

2 days ago

Yep, definitely this. They should get credit for the open weights, and for being transparent that it isn't open source, though. People should stop being this confused when the messaging is pretty clear.

timmmmmmay

2 days ago

yeah except I can download this and run it on my computer, whereas Nano Banana is a service that Google will suddenly discontinue the instant they get bored with it