asnyder
3 months ago
How does this compare to the already established open source solutions such as Chatbox (https://github.com/chatboxai/chatbox), or Lobechat (https://github.com/lobehub/lobe-chat)?
Been using both. I like Chatbox for how snappy it is, but it's local only, vs LobeChat, which lets you set up a centralized host shared across clients but feels a bit clunkier.
CryptoBanker
3 months ago
One of the biggest differences I noticed off the bat is that llms includes prompt caching, which I'm not sure I've seen in any other self-hosted UI options.
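For what it's worth, the caching being discussed is provider-side prompt caching that the client opts into per request, not something the UI computes itself. A minimal sketch of what a chat UI has to send, assuming Anthropic-style caching via their Python SDK (the model name and context file are placeholders):

    import anthropic

    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

    # Hypothetical large, reusable context you want billed at the cached rate on later turns.
    long_shared_context = open("docs.txt").read()

    response = client.messages.create(
        model="claude-3-5-sonnet-20241022",  # placeholder; the prompt must exceed the model's minimum cacheable size
        max_tokens=512,
        system=[
            {
                "type": "text",
                "text": "You are a helpful assistant.\n\n" + long_shared_context,
                "cache_control": {"type": "ephemeral"},  # marks everything up to this block as cacheable
            }
        ],
        messages=[{"role": "user", "content": "Summarize the shared context."}],
    )
    # usage reports cache_creation_input_tokens on the first call and
    # cache_read_input_tokens on later calls that hit the cache.
    print(response.usage)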
asnyder
3 months ago
I see Lobe and Chatbox both have prompt caching toggles; are you referring to something else?
asnyder
3 months ago
I've mistakenly given Chatbox a new feature, sorry :). In LobeChat, after you select a particular model, it enables a mini-settings menu next to the model that lets you set caching, deep thinking, and thinking token consumption.
CryptoBanker
3 months ago
Ah, that must be new since the last time I tried LobeChat.
CryptoBanker
3 months ago
Where do you see that? I can't seem to find it in the web or desktop apps for lobechat.
EDIT: I also don't see it in Chatbox