Show HN: Open-source real-time talk-to-AI wearable device for a few $

103 points, posted 9 months ago
by zq2240

16 Comments

throwaway314155

9 months ago

> In the US, about 1/5 children are hospitalized each year that don’t have a caregiver. Caregiver such as play therapists and parents’ stress can also affect children's emotions.

Trust me, large language models are not anywhere close to being able to substitute as an effective parent, therapist, or caregiver. In fact, I'd wager any attempts to do so would have mostly _negative_ effects.

I would implore you to reconsider this as a legitimate use case for your open device.

> We believe this is a complement tool and it is not intended to replace anyone.

Well which is it? Both issues you list heavily imply that your tool will serve as a de facto replacement. But then you finish by saying you don't intend to do that. So what aspects of the problems you listed will be solved as a simple "complement tool"?

echoangle

9 months ago

I don't want to criticize a cool project but why do people feel the need to create new hardware for AI things? It was the same thing with the rabbit r1. Why do I need a device that contains a screen, a microphone and a camera? I have that, it's called a smartphone. Being a bit smaller doesn't really help because I have my phone with me almost all the time anyways. So it's actually more annoying to carry the phone and the new device instead of just having the phone. I would be happy with it just being an app.

allears

9 months ago

This tool requires a paid subscription, but it doesn't say how much. The hardware is affordable, but the monthly fees may not be. Also, the hardware is only useful as long as the company's servers are up and running -- better hope they don't go out of business, get sold, etc.

xtagon

9 months ago

I highly, highly doubt we've reached the level of AI safety required to make it a good idea to replace (or even just supplement) caregivers for children. Nobody has truly solved the safety problems with AI yet; everyone is just doing the best they can, and it seems like a terrible idea to put that in direct, intimate contact with emotionally vulnerable children. We've already passed the threshold of AI suggesting suicide to testers[0], and the bar has since been raised to actual users being told the same[1], with someone reportedly following through.[2]

[0]: https://www.artificialintelligence-news.com/news/medical-cha...

[1]: https://ainiro.io/blog/googles-ai-encouraging-people-to-comm...

[2]: https://www.euronews.com/next/2023/03/31/man-ends-his-life-a...

jstanley

9 months ago

Personally I have found talking to AI to be much more draining than typing. It's a bit like having a phone call vs IM. I'd basically always prefer IM as long as I'm getting quick responses.

aithrowawaycomm

9 months ago

This seems to be yet another reckless and dishonest scam from yet another cohort of AI con artists. From starmoon.app:

> With a platform that supports real-time conversations safe for all ages...Our AI platform can analyse human-speech and emotion, and respond with empathy, offering supportive conversations and personalized learning assistance.

These claims are certainly false. It is not acceptable for AI hucksters to lie about their product in order to make a quick buck, regardless of how many nice words they say about emotional growth.

Do you have a single psychologist on your staff who signed off on any of this? Telling lies about commercial products will get you in trouble with regulators, and it truly seems like you deserve to get in trouble.

stavros

9 months ago

I'd love a hardware device that streamed the audio to an HTTP endpoint of my choosing, and played back whatever audio I sent. I can handle the rest myself, but the hardware side is tricky.
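
Something like this minimal Flask sketch is roughly what I mean for the server side (the /audio route and handle_audio are my own placeholder names, not anything the project provides):

    from flask import Flask, request, Response

    app = Flask(__name__)

    def handle_audio(pcm_in: bytes) -> bytes:
        # Placeholder: echo the audio back; swap in your own STT -> LLM -> TTS here.
        return pcm_in

    @app.route("/audio", methods=["POST"])
    def audio():
        pcm_in = request.get_data()                 # raw audio bytes from the device
        pcm_out = handle_audio(pcm_in)              # whatever pipeline you like
        return Response(pcm_out, mimetype="audio/wav")

    if __name__ == "__main__":
        app.run(host="0.0.0.0", port=8080)

The hard part is still firmware that captures, streams, and plays back audio reliably.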

butterfly42069

9 months ago

I think this is great. Ignore the people comparing your project to the commercial Rabbit R1; they're comparing apples and oranges.

A lot of the subscription-based parts could be replaced by networking into a machine running whisper/ollama etc. anyway.
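
Roughly along these lines, I'd guess (a sketch assuming the openai-whisper package and an Ollama server on its default port; "base" and "llama3" are just example model names):

    import whisper
    import requests

    stt = whisper.load_model("base")                # local speech-to-text

    def reply(wav_path: str) -> str:
        text = stt.transcribe(wav_path)["text"]     # transcribe the recording
        resp = requests.post(
            "http://localhost:11434/api/generate",  # Ollama's generate endpoint
            json={"model": "llama3", "prompt": text, "stream": False},
        )
        return resp.json()["response"]              # the model's text reply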

Keep up the great work I say :)

crooked-v

9 months ago

So one big question is, will the service refuse to answer when topics like sex, self harm, physical violence, drug use, or the like come up? Every bigcorp LLM tends towards the social propriety of a Victorian governess, and for plenty of people being able to talk about those things is a baseline requirement for even the blandest 'friend'.

pocketarc

9 months ago

In the 2001 movie A.I., the child protagonist plays with an "old" robotic teddy bear named "Teddy".

The bear's movement isn't great, and its voice sounds robotic. Projects like this make me think that Teddy either could be built with today's tech, or is very close to being buildable.

gcanyon

9 months ago

I was a solo latchkey kid from age... 5 or 6 maybe? I developed a love of reading and spent basically all my waking hours that weren't forcibly in the company of others doing that, by myself: summertime in San Diego, teenage me read 2-4 books a day. I grew up to be incredibly introverted (ironic that I work as a product manager, which strongly favors extroverts) and I wonder how differently I might have turned out if a digital companion had urged me to be more social (something my parents never did), or just interacted with me on a regular basis.

vunderba

9 months ago

I predicted a Teddy Ruxpin / AG Talking Bear driven by LLMs a while ago. My biggest fear is that the Christmas toy of the year will be a mass-produced, always-listening device that's effectively constantly surveilling and learning about your child, courtesy of Hasbro.

deanputney

9 months ago

Is this specific hardware necessary? If I wanted to run this on a Raspberry Pi Zero, for example, is that possible?

napoleongl

9 months ago

I can see something like this being used in various inspection scenarios. Instead of an inspector having to fill out a template or fiddle with an iPad-thingie in tight situations, they can talk to this and an LLM converts it to structured data according to a template.
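
Hand-wavy illustration of the idea, assuming a local Ollama server and a made-up inspection template (the field names are invented):

    import json
    import requests

    TEMPLATE = '{"component": "", "defect": "", "severity": "", "action": ""}'

    def to_structured(transcript: str) -> dict:
        prompt = (
            "Fill in this JSON inspection template from the note below.\n"
            f"Template: {TEMPLATE}\nNote: {transcript}"
        )
        resp = requests.post(
            "http://localhost:11434/api/generate",
            json={"model": "llama3", "prompt": prompt,
                  "format": "json", "stream": False},
        )
        return json.loads(resp.json()["response"])  # the model is asked to reply with JSON only

    print(to_structured("Left hinge on hatch 3 is cracked, replace within a week."))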

danielbln

9 months ago

Any plans for being able to run the entire thing locally with local models?

endofreach

9 months ago

Why not just give kids MDMA if they feel lonely?