Show HN: Fission – Offline Voice Notes with Local Llama Android (React Native)

1 point, posted 7 hours ago
by venkada

1 comment

venkada

7 hours ago

Hi HN,

I built this because I wanted AI summary features mainly to pull out the action items from a meeting, a one-on-one conversation, or just talking to yourself. The core constraint is that everything works offline.

Fission runs entirely on-device:

Transcription: Uses Vosk for offline STT.

Summarization: Runs a quantized LLM locally to summarize the transcript (currently Qwen 0.6B).

Stack: React Native / Expo.
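To make the pipeline concrete, here is a minimal sketch of the summarization side in TypeScript. This is not Fission's actual code; the function names (`chunkTranscript`, `buildSummaryPrompt`) are hypothetical, and the chunking step assumes a small model like Qwen 0.6B has a limited context budget, so long transcripts need to be split before summarization.

```typescript
// Hypothetical sketch of preparing a Vosk transcript for a small
// on-device LLM. Not Fission's real API; names are placeholders.

// Split a long transcript into chunks that fit a small context
// window, breaking on sentence boundaries where possible.
function chunkTranscript(text: string, maxChars = 2000): string[] {
  const sentences = text.split(/(?<=[.!?])\s+/);
  const chunks: string[] = [];
  let current = "";
  for (const s of sentences) {
    if (current.length + s.length + 1 > maxChars && current) {
      chunks.push(current.trim());
      current = "";
    }
    current += s + " ";
  }
  if (current.trim()) chunks.push(current.trim());
  return chunks;
}

// Build an action-item extraction prompt for one transcript chunk.
function buildSummaryPrompt(chunk: string): string {
  return [
    "Summarize the following meeting transcript.",
    "List concrete action items as bullet points.",
    "",
    "Transcript:",
    chunk,
  ].join("\n");
}
```

Each chunk would then be fed to the local model and the per-chunk summaries merged; the details of the model call depend on the inference runtime used.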

It’s open source (GPLv3). I’d love feedback on how inference performs on older devices. I'm currently working on better transcription options.