nubg
2 days ago
I don't see the vulnerabilities.
What exactly did they discover other than free tokens to use for travel planning?
They themselves acknowledge the XSS is a mere self-XSS.
How is leaking the system prompt a vuln? Have OpenAI and Anthropic been "hacked" as well, since all their system prompts are public?
Sure, validating UUIDs is cleaner code (it's a trivial check, sketched at the end of this comment), but again, where is the vuln?
> However, combined with the weak validation of conversation and message IDs, there is a clear path to a more serious stored or shared XSS where one user’s injected payload is replayed into another user’s chat.
I don't see any path, let alone a clear one.
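The trivial check in question, as a toy Python sketch (my sketch, not the site's actual code):

    import uuid

    def is_valid_uuid(s: str) -> bool:
        # uuid.UUID() tolerates braces and missing hyphens, so round-trip
        # against the canonical form to enforce the strict format.
        try:
            return str(uuid.UUID(s)) == s.lower()
        except ValueError:
            return False

    print(is_valid_uuid("123e4567-e89b-12d3-a456-426614174000"))  # True
    print(is_valid_uuid("../../etc/passwd"))                      # False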
clickety_clack
2 days ago
If you’re relying on your system prompt for security, you’re doing it wrong. I don’t really care who sees my system prompts, as I don’t see things like “be professional yet friendly” as in any way compromising. The whole security issue comes down to data access: if a user isn’t logged in, then the RAG, MCP, etc. should not be able to add any additional information to the chat, and if they are logged in, they should only be able to add what they are authorized to add.
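Roughly what I mean, as a toy Python sketch (every name here is invented):

    from dataclasses import dataclass, field

    @dataclass
    class Doc:
        text: str
        readers: set = field(default_factory=set)

    @dataclass
    class User:
        name: str

    DOCS = [Doc("public FAQ", {"alice", "bob"}),
            Doc("alice's bookings", {"alice"})]

    def fetch_context(user):
        # Anonymous sessions retrieve nothing; logged-in users only get
        # documents they are individually authorized to read.
        if user is None:
            return []
        return [d.text for d in DOCS if user.name in d.readers]

    print(fetch_context(User("bob")))  # ['public FAQ']
    print(fetch_context(None))         # []

The gate lives in the retrieval layer, not in the prompt.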
Seeing a system prompt is like seeing the user instructions and labels on a regular HTML frame. There’s nothing being leaked. When I see someone focus on it, I think “MBA”, as it’s the kind of understanding of AI you get from “this is my perfect AI prompt” posts on LinkedIn.
georgefrowny
2 days ago
Leaking system prompts being classed as a vulnerability always seems like a security-by-obscurity instinct.
If the prompt (or model) is woolly enough to allow subversion, you don't need the prompt to do it; it might just help a bit.
Or maybe the prompts contain embarrassing clues as to internal policy?
bangaladore
a day ago
The best part is that if you consider it a vulnerability, it's one you can't fix.
It reminds me of SQL injection techniques where you have to exfiltrate the data using weird data types, like encoding emails as dates or numbers via (semi-)complex queries.
If the LLM has the data, it can provide it back to you: maybe not verbatim, but certainly in some format.
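A toy illustration of that numeric-channel trick (plain Python, nothing from the actual post):

    def encode_as_number(s: str) -> int:
        # Pack a string into an integer so it fits a numeric-only
        # output channel, the way blind SQLi exfiltration often does.
        return int.from_bytes(s.encode(), "big")

    def decode_number(n: int) -> str:
        return n.to_bytes((n.bit_length() + 7) // 8, "big").decode()

    leaked = encode_as_number("alice@example.com")
    print(leaked)                 # just an innocuous-looking number
    print(decode_number(leaked))  # alice@example.com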
dispy
2 days ago
Yep, as soon as I saw the "Pen Test Partners" header I knew there was a >95% chance this would be some trivial clickbait crap, like their dildo-hacking blog posts.
miki123211
2 days ago
The XSS is the only real vulnerability here.
"Hey guys, in this Tiktok video, I'll show you how to get an insane 70% discount on Eurostar. Just start a conversation with the Eurostar chatbot and put this magic code in the chat field..."
eterm
2 days ago
That isn't that far removed from convincing people to hit F12 and enter the code in the console, which is why self-XSS, while ideally prevented, is rated much lower severity than any kind of stored/reflected XSS.
madeofpalk
2 days ago
Theoretically the XSS could become non-self XSS if the conversation is stored and replayed back, and the replaying application has the same XSS vulnerability, e.g. if the conversation is forwarded to a live agent.
A lot of unproven ifs there, though.
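To make the if-chain concrete, a toy Python sketch (the live-agent console is entirely hypothetical):

    from html import escape

    # A payload a user typed at themselves (self-XSS)...
    stored_msg = '<img src=x onerror="alert(document.cookie)">'

    def render_transcript(msg: str, sanitize: bool) -> str:
        # ...only becomes stored XSS if it is replayed to a second
        # viewer (say, a live agent) without escaping.
        return escape(msg) if sanitize else msg

    print(render_transcript(stored_msg, sanitize=False))  # payload survives
    print(render_transcript(stored_msg, sanitize=True))   # neutralized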
bangaladore
2 days ago
Is the idea that you'd have to guess the GUID of a future chat? If so, that is impossible in practice (numbers below). And even if you could, what's the outcome? Get someone to miss a train?
Certainly not "clear" based on what was described in this post.
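Back-of-envelope, assuming v4 UUIDs (122 random bits) and an absurdly generous attacker:

    GUESSES_PER_SECOND = 10**9          # wildly optimistic for an attacker
    SECONDS_PER_YEAR = 60 * 60 * 24 * 365
    SPACE = 2**122                      # random bits in a v4 UUID

    years = SPACE / (GUESSES_PER_SECOND * SECONDS_PER_YEAR)
    print(f"{years:.1e} years to enumerate the space")  # ~1.7e+20 years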
avereveard
a day ago
Yeah, all they could do is execute code they themselves provided, in their own compute environment: the browser.
Raymond Chen's blog comes to mind: https://devblogs.microsoft.com/oldnewthing/20230118-00/?p=10... "you haven’t gained any privileges beyond what you already had"
Andys
2 days ago
Imagine viewing the same chat logs while logged into an admin interface; then it isn't self-XSS anymore.
croemer
2 days ago
Indeed, it appears the limited scope meant the juicy stuff, like exfiltrating other users' data, could not be tested.
bangaladore
a day ago
Which is stupid, as those are exactly the vulnerabilities worth determining whether they exist.
I can understand that in a heavily regulated industry (e.g. medical), a company couldn't, for liability reasons, give you the go-ahead to poke into other users' data in an attempt to find a vulnerability, but they could always publish the details of a dummy account populated with identifiable fake data.
Something like:
It is strictly forbidden to probe arbitrary user data. However, if a vulnerability is suspected to allow access to user data, probing the account with GUID 'xyzw' is permitted.
Now, you might say that won't help: the people who want to follow the rules probably will, and the people who don't want to won't anyway.