Tistron
a day ago
The main point raised in the article is that these bots may void attorney-client privilege.
But the real danger with these IMO is that they're turning casual conversations into a permanent record, and one that will be completely discoverable in court, should the company get into trouble later.
a day ago
Plus they are super inaccurate. Gemini gets one of its three bullets subtly or majorly wrong almost every time. Just a few weeks ago Gemini said we're rolling out our payment setup in Russia. You know, the place we have 20+ sanctions packages on? We were talking about France in the meeting.
a day ago
We've found they're surprisingly good if everyone on the call is using a decent headset.
The problems start when using conference room audio or someone is on their laptop mic. If they miss a word they never do unintelligible, they just start playing madlibs based on the rest of the sentence.
We just went through a round of 100+ (non-sensitive) VoC interviews and they really cut down the workload of compiling all of the feedback. If the audio was a little shaky though, we pretty much had to throw away the transcripts and do them from scratch like we used to.
a day ago
> If they miss a word they never do unintelligible, they just start playing madlibs based on the rest of the sentence.
Imo this is the single biggest flaw of LLMs. They're great at a lot of things, but they can't tell when they're wrong (or don't have enough information to actually work with), and that's a critical weakness.
IMO there's nothing structural about why they shouldn't be able to spot this and correct themselves - I suspect it's a training issue. But presumably bots that infer context and fill in the gaps rank better on what people like... at the cost of accuracy.
a day ago
I don't think it's a training issue. There's simply no inherent "I don't know" in the transformer architecture. Unless the input is something completely unknown, the nearest neighbor will be chosen, and that will be whatever sounds similar or is relevant, even if it causes a problem.
a day ago
The final output of the neural network part of an LLM is a vector with weights for every token, that is then usually softmaxed and picked from. Can we not quantify the uncertainty by looking at the distribution of weights of the top 10 options? Like we expect for a note-taking app that the top choice would be something like 98% certain, and if we see that the model gives a weight of 60% to "Russia" and 30% to "France", that's just not enough certainty to simply output "Russia". That's exactly when it should say "<uncertain>" or something instead.
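A rough sketch of that idea (the logits and the 0.9 cutoff here are made up for illustration, not any particular model's numbers): softmax the candidate logits and emit a placeholder whenever the top option isn't dominant enough.

```python
import math

def softmax(logits):
    # Numerically stable softmax over a list of raw logits.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def pick_or_flag(candidates, logits, cutoff=0.9):
    """Return the top candidate, or a placeholder when the winner's
    softmax probability falls below the cutoff."""
    probs = softmax(logits)
    best = max(range(len(probs)), key=probs.__getitem__)
    return candidates[best] if probs[best] >= cutoff else "<uncertain>"

# A lopsided distribution keeps the top token:
print(pick_or_flag(["Paris", "Rome"], [6.0, 0.5]))    # Paris
# A roughly 67/33 split (like the 60/30 case above) gets flagged:
print(pick_or_flag(["Russia", "France"], [1.0, 0.3])) # <uncertain>
```

The catch, as the replies below note, is whether softmax weights are actually the kind of confidence we want.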
a day ago
I've looked at confidence outputs for the chosen words from several STT providers, and low confidence definitely does indicate a risk that the model has misheard.
Not always, though. Say someone is saying "1 2 3 4 <unintelligible> 6 7 8": the model will happily write 5 in the middle and give it good confidence, because based on the context it is the only likely word. This varies between STT providers though.
Basically, the reason they are so good on average is that they estimate what is most often said in a given context. The context here is not only the audio but also what was transcribed previously.
And if you don't want the transcription to be based on what is most likely to be said in context, but only on the audio around each individual word, it is going to be awfully wrong most of the time.
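A toy illustration of why that happens (all the words and probabilities below are invented): when the acoustic evidence for a slot is uninformative, a context prior can still push the posterior to near-certainty, so the "5" in "1 2 3 4 _ 6 7 8" comes out looking confident even though the audio never supported it.

```python
# Acoustic evidence for the masked slot: totally uninformative.
acoustic = {"five": 0.2, "nine": 0.2, "hive": 0.2, "dive": 0.2, "jive": 0.2}
# Context prior from "1 2 3 4 _ 6 7 8": "five" is the only plausible word.
context_prior = {"five": 0.96, "nine": 0.01, "hive": 0.01,
                 "dive": 0.01, "jive": 0.01}

def combine(acoustic, prior):
    """Multiply acoustic and context scores per word, then renormalize."""
    joint = {w: acoustic[w] * prior[w] for w in acoustic}
    z = sum(joint.values())
    return {w: p / z for w, p in joint.items()}

posterior = combine(acoustic, context_prior)
best = max(posterior, key=posterior.get)
print(best, round(posterior[best], 2))  # five 0.96
```

The per-word confidence the provider reports is the posterior, so it looks great despite the audio contributing nothing.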
a day ago
It seems like the problem in this application is attention itself. Makes me wonder if a transformer is the correct architecture for transcription.
13 hours ago
I think it might break the game. Most words sound similar enough to other words. "cat" and "get", "he simply" and "his simply", etc.
Add accents, and half the words would be indistinguishable from each other (note that word "indistinguishable", ironically, would be quite distinguishable).
People parse things like that with a lot of context: based on their own understanding of the situation, their grasp of the speaker's accent or speech impairments, etc.
Add to that that most native English speakers blur words together. The pause that some languages use to separate words is used in English to separate sentences. Spoken English doesn't natively separate words.
Speech-to-text before LLMs was meh. I think it's the ability to generate filler for uncertain words that makes it feel magic compared to before.
a day ago
Unfortunately, that likely just doesn't exist. Everything suggests that these models are confident about their mistakes.
a day ago
I mean, what I describe absolutely does exist, that's how LLMs work. The question is whether the relative weights are actually a good measure of confidence, and as the other reply to my comment points out, there are examples where it's not -- at least not the kind of "confidence" we really want.
a day ago
It's not inherent in the transformer architecture - we do try to ingrain a sense of uncertainty, but it's difficult, not only technically but also philosophically/culturally. How confident do you want the model to be in its answer to "why did Rome fall"?
Lots of tools in our toolbelts to do better uncertainty calibration but it trades off against other capabilities and actually can be rather frustrating to interact with in agentic contexts since it will constantly need input from you or otherwise be indecisive and overly cautious. It’s not technically a limitation of transformer architecture but it is more challenging to deal with than other architectures/statistical paradigms.
Like you can maintain a belief state and generate conditional on this and train to ensure belief state is stable and performant. But evals reward guessing at this point, and it’s very very hard to evaluate the calibration in these open ended contexts. But we’re slowly getting there, just not nearly as fast as other capabilities.
a day ago
>How confident do you want the model to be in its answer to “why did Rome fall”?
The confidence level can be any, as long as it's reported accurately often enough. "This is my conjecture, but", "I'm not completely sure, but", and "most historians agree that" are all perfectly valid ways to start a sentence, which LLMs never use. They state mathematical truth, general consensus, hotly debated stances, and total fabrication, with the exact same assertiveness.
a day ago
> > Like you can maintain a belief state and generate conditional on this and train to ensure belief state is stable and performant
> ways to start a sentence, which LLMs never use
A huge part of the problem is we've invented a document-generator setup which exploits human cognitive illusions: even the smartest person can't constantly override the instinctive brain-bits that "see" fictional entities and infer the intent of a mind. That makes it weirdly hard to discuss the setup's shortfalls or how to improve it.
To wit: The machine does not possess any kind of confidence about how Rome fell. Or even whether Rome fell. It has "confidence" about which word/token will come next in a "typical" document, given that the document so far has text like "How did Rome fall?" It may be straightforward to burn money training the system so that its "typical" story never has a computer-character with confident words about Roman history, but that's just papering over the underlying problem.
TLDR: We can't fix the thinking-habits or beliefs inside the mind of an entity that doesn't actually exist. Changing the story-generator to contain a tee-totaling Dracula dispensing life-advice doesn't mean we "cured the disease of vampirism."
a day ago
IIRC people actually measured it, and one of the things RLHF does is to turn the fairly well-calibrated probability judgments of the raw predictive model into an essentially binary and much more inaccurate “definitely” / “no idea, coin toss”, the former member of the pair being of course much more frequent. The architecture is perfectly capable of uncertainty, it’s the humans that hate it and sand the capability off until the result fits their preconceptions.
(Which is intensely depressing to a human that doesn’t.)
a day ago
I feel like if you trained harder for "I don't know", it would drag down competence everywhere else somehow. Like, the strength of a model is exactly its ability to grasp at straws and very often find the right one.
If you ask a good model something that makes no sense, it will tell you it makes no sense and it can't answer the question; so I know it's possible.
a day ago
Surely they could be built to put placeholders for low-confidence predictions and ignore those bits when predicting the rest?
The reason AI companies won't do this, of course, is that it would completely ruin the illusion of confident competence these machines project.
a day ago
The thing is, if LLMs are stochastic parrots predicting the next word (aka a partially decent autocomplete), there's no reason they can't complete <specific question it can't answer> with "I don't know" - that's a perfectly valid sentence too.
That's why I'm still cautiously optimistic about LLMs someday being good enough. I don't know if or when someone will manage to do it, but I'm hopeful.
a day ago
It's just a token predictor, what do you expect? What we need are tools that embrace that and ping the agent to validate or double-check what it just said. But the trade-off is that this might hamper their capabilities to some degree.
a day ago
> It's just a token predictor what do you expect?
The point isn't that it's unexpected. It's that prior speech-to-text systems were much better about this particular failure mode: prone to spitting out entirely incorrect words, but not to rephrasing entire sentences.
This is a particularly bad failure mode because people don't notice it.
> What we need are tools that embrace that and ping the agent to validate what it just said or double check.
This is not a problem that can be fixed by throwing more AI at it. It's a shared problem to all such systems, whether they're audio-text transformers or LLMs. Agentic review would just further push the system towards creating output that looks correct, but is not.
LLM translation does the same, yielding more natural text, but generally not better translation. In several cases, especially the "easy" translation between similar languages (e.g. within a language group like Germanic or Nordic) LLM-powered translation is notably worse than more primitive "word & phrase book" systems, tending to change the meaning of the text in order to have good grammar whereas these older systems would give crude or grammatically incorrect translations that still retained the core meaning.
a day ago
I often (ish) translate between English and German, two languages I speak very well. The quality of translation is amazing and far better than what old systems did.
Maybe it depends on topics or length, for me it's usually 1-2 paragraphs of a German article to share online.
13 hours ago
> The quality of translation is amazing and far better than what old systems did.
Are you native in both languages? If you are only native in one of them, it would be insightful to find out whether people with your skill set, but native in the language you are not, share your opinion.
a day ago
> Maybe it depends on topics or length, for me it's usually 1-2 paragraphs of a German article to share online.
Same languages, same use case. My experience is different. On both google translate and others. ¯\_(ツ)_/¯
19 hours ago
Haven’t used google translate in a long time, mostly because of quality issues before LLMs. Deepl was leading for a while, nowadays I’m very happy with Kagi translate.
a day ago
Older ML systems were much better at exposing their internal confidence. Plenty of papers reverse-engineer this kind of interpretability for open-weight models, and all the models exposed logprobs early on. This seems solvable if prioritized: the unintelligible words should come out low confidence. Getting per-token data that aids in understanding the predictions is entirely feasible as an engineering effort. It won't be enough to address all the problems, but it should help quite a bit.
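As a sketch of what that engineering effort could look like (the word/logprob pairs and the -0.7 floor below are invented; real APIs expose similar per-token data under different field names), you could annotate anything decoded below a confidence floor instead of silently keeping the guess:

```python
# Hypothetical per-token (word, logprob) pairs from an ASR/LLM decode.
decoded = [
    ("rolling", -0.05),
    ("out", -0.02),
    ("payments", -0.10),
    ("in", -0.03),
    ("Russia", -1.20),  # the model was guessing here
]

def annotate(tokens, threshold=-0.7):
    """Mark tokens whose logprob (log of the model's probability)
    falls below the threshold, instead of presenting them as fact."""
    out = []
    for word, lp in tokens:
        out.append(f"[{word}?]" if lp < threshold else word)
    return " ".join(out)

print(annotate(decoded))  # rolling out payments in [Russia?]
```

A reviewer (or a downstream summarizer) then knows exactly which words to re-check against the audio.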
a day ago
While you're correct about what the audio models are - at least somewhat (they're not exactly like text-based LLMs) - you seem to brush his point away too quickly before fully exploring it.
This is a solvable issue, the current model and harnesses just aren't made with that assumption - hence they're doing "best effort while guessing if unsure".
Give it a few more months to years and things will likely settle how he pitched - at least in the context of note taking: only let it become "lore" if it didn't have to guess a word.
Currently there is basically only one mode - and it's optimized for conversation. The note taking is just glued on with that functionality as the backbone, and that's probably not going to stay.
a day ago
> Give it a few more months to years and things will likely settle how he pitched - at least in the context of note taking: only let it become "lore" if it didn't have to guess a word.
I'm hesitant to concede even that. Like any computational linguistics problem, accuracy relies on coverage at all levels: from morphology, through syntax and semantics, to speech acts and world knowledge.
I worked with state-of-the-art speech recognition in a healthcare setting. The model was specifically trained on a small set of languages, with an emphasis on covering medical terminology.
It worked great for conversations most of the time, but sometimes messed up very badly. For instance when patient would mention the name of a relative, a street address or phone number. Spelling out an email address would mess it up completely.
It's just like when you're a horrible typist and rely on spell checking: the red squiggles are gone, but the story no longer makes sense. Or when you "autofix" a syntax error, but the meaning diverges from your intention.
As the technology improved, the number of wrong words decreased, but the mistakes got more severe.
a day ago
> what do you expect?
If the prediction strength is below X, put an indicator that it couldn't make a valid prediction?
a day ago
>It's just a token predictor what do you expect?
Someone tell Altman
a day ago
It's a benchmark and eval issue. Guessing gets them the right result sometimes, so the models rank better on error rate than they otherwise would. We need benchmarks that penalize being wrong WAY more than saying "I don't know".
Of course there's a secondary problem that the model may then overuse the unintelligible option, but that's something that's a matter of training them properly against that eval.
You could also try thresholding the output based on perplexity to remove the parts that the model is less sure about, but that's not going to be super accurate I think.
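A minimal sketch of that perplexity thresholding (the segment texts, logprobs, and the cutoff of 2.0 are all made up for illustration): compute per-segment perplexity from the token logprobs and flag anything the model decoded with too much uncertainty.

```python
import math

def perplexity(logprobs):
    """Perplexity of a token span: exp of the mean negative logprob."""
    return math.exp(-sum(logprobs) / len(logprobs))

def filter_segments(segments, max_ppl=2.0):
    """Keep segments decoded with low perplexity; flag the rest for
    human review instead of trusting the transcript outright."""
    kept, flagged = [], []
    for text, lps in segments:
        (kept if perplexity(lps) <= max_ppl else flagged).append(text)
    return kept, flagged

segments = [
    ("we meet Thursday", [-0.1, -0.2, -0.1]),             # ppl ~ 1.14
    ("rolling out in Russia", [-0.9, -1.4, -0.8, -1.1]),  # ppl ~ 2.86
]
kept, flagged = filter_segments(segments)
print(kept)     # ['we meet Thursday']
print(flagged)  # ['rolling out in Russia']
```

As the comment says, this won't be super accurate on its own, since the model can be confidently wrong when context makes a guess "obvious".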
10 hours ago
Benchmarking for "I don't know" over a wrong answer seems to be the right way to steer the industry towards models that are good at this. AA-Omniscience is one such benchmark.
AA-Omniscience is a knowledge and hallucination benchmark that rewards accuracy, punishes bad guesses and provides a comprehensive view of which models produce factually reliable outputs across different domains. The benchmark contains 6,000 questions across 6 major domains, derived from authoritative academic and industry sources and generated automatically using an LLM-based question generation agent to ensure unambiguity, scalability and factual precision.
a day ago
Yeah, I broadly agree with you. I've tried explicitly adding a prompt to "ask questions and clarify", and even fairly decent models like Gemini Pro (2.5 or 3) tend to ask questions for the sake of it.
Which reminds me that that's another big issue with LLMs - they'll blindly do whatever you ask them to, without pushback. (Again, I miss 3.5/3.6 era Sonnet which actually had half a spine. Fuck anthropic for blindly chasing coding benchmarks at the cost of everything else.)
I've engaged in several "CMVs" (or "tell me why X is bad") with LLMs, and very often it's clear it's just saying stuff to say it, giving very terrible points on unjustifiable positions that collapse the moment I counter argue even slightly rationally.
a day ago
Their quality for different language accents also significantly varies.
Got a team with Indian, Chinese, Texan, British, and Australian accents? Your A.I.-powered transcription tool is going to get 80% of your conversation wrong.
16 hours ago
I have done many transcriptions of messy meeting recordings with thick Euro-English accents, and a local Whisper large handled them near perfectly.
a day ago
> headset
Half- vs. full-duplex. Headphones are all you really need, though of course a directional mic and/or one closer to your mouth will yield a clearer audio recording as well.
a day ago
>If they miss a word they never do unintelligible, they just start playing madlibs based on the rest of the sentence.
Isn't that what people do?
a day ago
For in-person conversations to keep the conversation flowing, sure, but any good transcription will say [unintelligible] when the scribe couldn't tell despite being able to listen over it again and again.
Nixon tapes for example: https://kagi.com/search?q=site%3Anixonlibrary.gov+%22unintel...
a day ago
Recent example:
- the person said 8 to 10
- LLM transcribed as 18
Granted, the person had a foreign accent and didn't enunciate very clearly. But I knew they meant 8-10 if for no other reason than 18 didn't make sense given the context. But the AI isn't smart enough, and then 18 goes into the record.
a day ago
My workflow uses krisp.ai for taking a transcript, and then I have a dedicated project in Claude. I feed it the transcript and ask for it to give me a summary in a specific format I define, with good front matter, etc., and it needs to spit that out in a way that I auto-import into Obsidian.
But key in my prompt is asking 1) for it to flag any low confidence or context-nonsensical statements in the transcript, with the timestamp, so then I can listen to the original audio and either clarify, correct, or say "I couldn't understand that either, here's my best guess and mark it low confidence", then 2) which I see as critical: Claude also is told to create a "context" document that it maintains based on my answers, so it starts to gather ASR things like "transcript commonly hears A B and C as variants of name X", who is who, internal product and project names and context info on them. 3) Claude is told specifically to read this prior to summarizing the transcript, and to consult it as it is doing so, and to ask me on anything it's not confident on.
What is then starting to get quite powerful for me is moving beyond full-text search of my meeting notes in Obsidian (I'm a PM in a lot of meetings): I can point Cowork at the Obsidian notes folder (because they're all Markdown) and start doing rich "querying" of it. "When did [stakeholder] first mention [feature] as a release blocker?" - and it can point to the meeting.
My system works well, and I've done a bit to fine-tune the automation and reduce friction. It's a bit easier to manage because I'm generally not creating summaries for broader consumption but for my second brain (I have a separate prompt that utilizes some of that "knowledge" to build those).
One thing I've found helpful with this is moving the summarization itself into something with "context/memory". Krisp is capable of generating summaries but can't/doesn't review prior transcripts. Its role is just "give me the transcript as you heard it".
a day ago
"This technology works as long as you're not a pleb"
a day ago
> The problems start when using conference room audio
RTO problems
a day ago
Verbatim transcriptions are usually very good, because even the occasional "can/can't" replacement is usually obvious within the context of the full conversation.
But the summarization feature is where the most ridiculous errors and omissions happen.
a day ago
OK, Zoom's default summary template is often lacking and incoherent, but switching it to the lengthy "Discussion" one works great. I think the default only works for single-topic meetings, which are rare.
a day ago
Given how financial services can impose silent inexplicable lifetime bans for using the wrong words in the "what is this transaction for" field, I'm wondering at what point the AI automatically reports people for sanctions violation based on its mishearing.
a day ago
That's presumably great for legal exposure because it increases deniability
a day ago
I wonder what kind of GDPR implications that has given the requirements around the accuracy of personal data.
a day ago
The AI note summaries in meetings I'm in are frequently totally inaccurate. They are actually inaccurate in two ways: they fabricate things that were never said (but always kind of close to something that was said), and they emphasize the totally wrong thing (e.g. acting like the entire conversation was about one topic when that was just a very small part).
I sincerely hope these aren't used in court.
a day ago
They will be discovered and used in litigation, and the results will be hilarious. Think about how much lawyers pick apart language (like statutes or the constitution) that was written deliberately by humans and subject to review and revision. Now we're going to have lawyers, e.g., seizing on word choice in AI notes that might have a sinister connotation when the original wording was innocuous.
21 hours ago
> seizing on word choice in AI notes that might have a sinister connotation
Ironic use of “sinister” when you probably mean “nefarious” and don't mean to perpetuate silly old superstitions about “left-handed” people being evil :p
21 hours ago
> word choice in AI notes that might have a sinister connotation
Potentially sinister due to the biases of the model, as the model may have been trained using internet content that has a lot more fictional titillating evil overlord board meetings than the actual mind-numbing real thing. Training that included extremist anti-corporate dogma might even bias the language models towards hallucinating the worst possible misinterpretation.
I've seen Whisper hallucinate whole legal arguments out of whole cloth when its AGC was broken and the audio went quiet - so I think the language models in it are more than powerful enough to politically load a transcript.
Good practice should be to minimize unnecessary stored records, because ANY record just means more processing costs in discovery - and god knows how much extra cost should it happen to have an unfavorable interpretation in some impossible-to-anticipate future litigation.
But if AI transcription must be used, it might be prudent to save a copy of the original audio along with it.
a day ago
This. The fact LLMs can also amplify existing closed-set research means even smaller shops can now search through a flood of documents to find smoking guns or critical evidence, much faster.
I’ve been saying it since the mid-10s, but it’s worth repeating: data isn’t gold, it’s more like oxygen in a room in that the higher the concentration, the more likely it is to poison the inhabitants or explode with an errant spark (lawsuit).
Collect only what's needed to perform the function, and store it only as long as necessary for compliance. Anything else is going to spook counsel.
a day ago
What are you trying to get away with I wonder?
a day ago
Chillax Palantir, your pro-surveillance throwaway incidentally makes such large data harvesting companies a larger target.
Limiting data retention doesn't mean hiding bad things, it means limiting exposure in general. The more of a thing - anything - that you have, the bigger a target you are to bad actors. By extension, companies holding vast sums of data beyond what's needed to process a given transaction or remain compliant with the law end up placing themselves at risk of being targeted and said data used as leverage against them.
You don't limit data to hide bad shit you're doing, you limit it to avoid others using it to do bad shit against you or your customers. If someone or something is engaged in bad shit, there will always be evidence somewhere regardless of data retention policies.
a day ago
Probably nothing, he's just not naive. You would have to have the intelligence of a small child to legitimately believe that authorities are only ever acting in benevolence, never with ulterior motives, and that they can never make mistakes. It's a matter of risk analysis here; we want to minimize the risk of shit going wrong.
a day ago
Never write if you can speak; never speak if you can nod; never nod if you can wink. -Lomasney (has aged well it seems)
a day ago
> But the real danger with these IMO is that they're turning casual conversations into a permanent record, and one that will be completely discoverable in court, should the company get into trouble later.
I would add that there is no guarantee they are correct, either.
a day ago
You’d use a computer generated transcript as a guide, not as proof - the proof is the recording of the person actually saying the thing, not the LLMs best guess of what it imagined the person saying.
“At timestamp X, person Y said Z” says the robot, and then you dutifully scrub the audio to timestamp X to verify.
a day ago
How can ignorance of the law not be a valid defense, while intentionally not recording known illegal activity is an effective defense?
a day ago
There is no duty to record everything everyone does. No one is legally compelled to record their actions except for a few rare situations...
17 hours ago
Exactly. “Don’t write anything down you wouldn’t want to see in the newspaper” just became “Don’t say anything in your meetings or 1:1s that you wouldn’t want to see in the newspaper.”
I’m overall an AI optimist but this is going to blow up in people’s faces very quickly. (I would explain this to my manager but he has AI note taking turned on in all his meetings!)
And that’s not even getting into the use of it for sensitive clinical notes in eg. mental health…
a day ago
The nuance here, too, is that concern about materials being discoverable does not mean the company is doing something illegal. Corporate law as it pertains to legislation (US in this case) is a dance between company and the current administration. When it comes to antitrust and related legislation, the equilibrium is shades of gray that change between administrations, and sometimes within the same administration. Companies look to optimize their outcomes, and the government is optimizing not so much for legality as for whatever the current administration sets as the main concern.
a day ago
Is it reasonable to expect any call going through a computer to be off the record, even without AI? Recordings were always discoverable, the only difference is that a paralegal doesn't need to manually go through every recording to determine if it's relevant.
17 hours ago
If you are on a call, it's already potentially recorded and transcribed. This just makes it so you also have that capability.
a day ago
Not only there
Also social settings will change, when everything you say stays on record forever in every meeting...
a day ago
> But the real danger with these IMO is that they're turning casual conversations into a permanent record, and one that will be completely discoverable in court, should the company get into trouble later.
The parts that aren’t privileged. On the other hand, perhaps the truth-seeking function of the justice system will be better equipped than before when we had to rely on (more) faulty human recollection.
a day ago
No idea why this was downvoted? That's what the justice system tries to do: find the truth. Missing or obfuscated evidence works against that.
a day ago
my fear exactly. same with something like Meta glasses. and i feel like we have moved quickly from the regulatory problems to "'tis a fact of life"
a day ago
Seems like something that will add to their billable hours
a day ago
Basically, it will be harder to hide illegal and unethical stuff companies routinely engage in.
a day ago
No, that would be a strict improvement. The AI note-takers can easily "mishear" or "misreport" non-existent illegal and unethical things. They also easily mess up numbers (which is a big problem, because a lot of decisions hinge on precise numbers -- imagine inflating an inventory by an order of magnitude, and then imagine having to pay a tariff on something that never existed).
I have a friend who works at a large-ish company that imports and manufactures things (in one of the clerical/quantitative professions). A few years back, they had the IT department go on a kind of "inquisition", wherein they forced employees to disable the summarization function that came with MS Teams, and threatened to fire them if they did not. The resistance to this demand was surprising -- most people are clueless about the cost of their own convenience. Worst of all, people would zone out of meetings, because the AI was producing summaries, which they would then never read.
The effect of the technology was that it made meetings infinitely more expensive, because the supposed benefit of meetings was nullified by complacency, _and_ it made the meetings a liability (incorrectly summarized meetings, that could be used in the discovery process, sure, but could also be sold by MSFT as a kind of market-research-data to competitors in the space).
Nothing illegal has to happen in these meetings at all, for this tech to cause an infinity of problems for the corporation. Every employee that uses these is effectively an unwitting spy. And if that is the case, then the meetings might as well be recorded and uploaded to YouTube (or whatever people watch these days)[1].
[1]: Maybe this is the future. Which I am okay with, but only if the entire planet has to do it, and the penalties for not doing it are irrecoverably severe.
a day ago
"If you give me six lines written by the hand of the most honest of men, I will find something in them which will hang him" - Cardinal Richelieu
Be careful what you wish for. Particularly when it involves tech that often gets it very, very wrong.
a day ago
Be real. The worry here, from these lawyers and companies, is not a false accusation. It is a true accusation, and the potential for it to succeed.
a day ago
That's an argument for recording everyone on earth 24/7. Is that what you mean?
a day ago
With the level of surveillance and erosion of privacy, that is essentially what is happening. We all know that we are being watched and surveilled. There is no longer an "argument". Anything you say in public or private could potentially be used against you in the future.
a day ago
No there's the potential of that happening. That isn't what actually happens. If everyone's phone was continuously recording and storing everything 24/7 we'd need much bigger batteries for one thing.
a day ago
I wish that we would all insist that it starts at the highest level down, rather than the other way round. Maybe also if you look at information on me, I get notified and get to look at the information on you. Unfortunately surveillance is a one way street.
a day ago
It'll just happen. Can't really fight technological progress.
a day ago
Actually, many people fight this kind of "progress". Just look at what is happening to Flock right now. True "technological progress" would be using technology to empower humans, not to exploit and subjugate them.
a day ago
Is it progress though?
a day ago
Smaller, more capable, cheaper? Yes, it absolutely is progress.
The only question is whether everyone gets a slice, or it ends up locked down so only governments and corporations have access to it. Obviously I come down on the sousveillance side of the fence - it's the lesser of two evils. If it exists I want everyone to have it, and you can't stop it existing.
a day ago
Show me the man and I will show you the crime.
Modernized. Industrial AI scale.
a day ago
Going to also be harder to hide completely legal, but not ideal stuff. Like randomly complaining about your boss to a colleague or casually discussing a feature you're stuck working on that you think is a bad idea.
a day ago
>casually discussing a feature you're stuck working on that you think is a bad idea.
I’ll be honest, this is something that I hope AI note taking tools capture and incorporate into summaries of the company’s status. Especially if they act as an intermediary without revealing the specific person who said it. There’s a lot of information latent within organizations that doesn’t get properly shared due to concerns of retaliation or simply embarrassment that would benefit everyone by being communicated sooner.
a day ago
The people supplying this technology explicitly want it to tell them what their serfs are doing. There will be no "honest but anonymous informing of upper management".
a day ago
That information is often intentionally not cascaded up the chain because the higher up you go, the more rigid the thinking gets - at least in my experience. Upstream doesn't want to hear the bad news or hear about how their idea is dumb. They want us to just do the bad idea and if the bad idea doesn't work out, they want to hang the ICs out to dry.
Maybe some smaller shops are not like this, but the bigger your company is, the more you'll find this type of thinking to persist.
In theory, I do like your idea - anonymously cascading feedback upstream. I just see no avenue for this to succeed in practice.
a day ago
Or even completely legal things that a majority of people agree with, ex:
"It seems that starting in 2025 one of your employees began spreading many bigly unfair and hateful lies about Dear Leader in team meetings. We at the Department of Truth would hate to see your operating license revoked for encouraging such unpatriotic behavior..."
a day ago
Or indeed, people with OCD and compulsive thoughts could be charged for things they never did.
a day ago
Back when I was in college, in a fraternity, we always assumed that the phones were tapped. Specifically, we never spoke about alcohol or marijuana (now legal) on the phone.
Even today, I generally assume that my phone could be tapped; even when talking with my trusted work colleagues, friends, and family. I'm extra careful about dirty jokes or "grey morality" in video conferences and email.
The same applies to speaking with lawyers. You never know when some motivated asshole wants to twist your words out of context, and the possibility of a recording just enables that behavior.
---
I know enough about security and encryption to know that unless I've exchanged keys physically with someone else, there really is no guarantee that someone hasn't compromised a certificate somewhere. (I.e., a "secure" connection on the internet is only secure enough for a credit card.)
a day ago
> Even today, I generally assume that my phone could be tapped; even when talking with my trusted work colleagues, friends, and family. I'm extra careful about dirty jokes or "grey morality" in video conferences and email.
This is horrifying. Why do you feel the necessity to self-censor? What consequences do you anticipate?
a day ago
Not the person who said this, but it's easy: Grow up in a country where the government listening in is common (without any transparent due process), and it becomes second nature.
And then add to that how easy it is to record phone conversations with today's phones (I've done it), it's easier on the brain to assume it's being recorded as opposed to wondering if it is.
But yes, I don't care about my dirty jokes being recorded :-) Illegal activity? Sure. But I solve that problem by not doing illegal things.
a day ago
Not necessarily. Snowden proved that US intelligence agencies can intercept basically anyone's electronic communications, and that they do this at a giant scale.
And yet, we still see comments from people horrified when you say you assume by default that someone (or more likely, something) is listening.
The normalcy bias is too strong.
a day ago
I'm not horrified that they think someone is listening, the horror was from them self censoring out of fear of surveillance.
21 hours ago
I apologize for misreading you. But being careful with this stuff can't hurt. Well, yes, it can hurt society, but it keeps you safe from the powers that be.
a day ago
It's a good policy, generally. Treat anything written down, email, etc, as something that could become public later. Anything that could be recorded and saved for later can be used against you if it's taken the wrong way. A questionable joke could become an HR complaint, as an example.
a day ago
Consider Person A, who is self-interested and may use a secret recording against you, even if you're following all rules and acting ethically. Additionally, consider if Person A is neutral but shares the recording with Person B, who unbeknownst to Person A, is actually out to get you.
a day ago
"Anticipate" is strong. Even the mere possibility of something bad happening, even if unlikely, is probably enough to outweigh the positives. In this case, that's just "saying things of grey morality in recorded settings."
My primary worry would be things being taken out of context when circumstances change later. Maybe there's lawsuit discovery, or maybe you have a falling out with a coworker who tries to defame you. The last thing you want is a motivated adversary to be given access to a wide trove of things that could be reframed to be used against you.
And it's not just off-color jokes or insensitive comments. At one job, we had a project that involved "fudging" billable numbers for a completely legitimate purpose. I was the one insisting that we don't use that term in writing at all to avoid any potential future misunderstandings. Call it an "adjustment" or "algorithmic modification" or something, but not "fudging" or "fabricating." Same kind of reasoning.
a day ago
Why bother? If the bad actor were that good, they could just make up the conversation using your voice and AI.
19 hours ago
Because the vast majority of things I self-censor I'm very careful about where and when I discuss it...
And I make a point of seeing the people I care about face to face.
a day ago
Adding on to this question, do you anticipate the same people capable of tapping phones to think less of you for a dirty joke? The people whose opinion of me would lower for something off-color and the people who possess the ability to wiretap me are a disjoint set lol.
a day ago
The point is it would be usable against you, but at the point where you're wiretapped and it can be used in court, all bets are off - you're probably in so much trouble you may as well tell the joke!
a day ago
yeah exactly haha; the threat model implies a level of "you're hosed" that something private I'd say to a friend isn't moving the needle on.
a day ago
Have you missed the last decade and a half of people having their lives ruined by social media mobs for minor slights?
a day ago
By their friends or government agents recording telephone conversations where they say an off-color joke?
a day ago
> I'm extra careful about dirty jokes or "grey morality" in video conferences and email.
That's a good policy when interacting with any human or device in a work context.
a day ago
When I say something inappropriate on the phone or over Discord, I always tell the NSA guy listening it was a joke.
a day ago
This is an old joke, but as AI becomes more and more capable it becomes increasingly likely that the government will have some kind of intelligence sifting though every post you make and message you send.
10 hours ago
And we can guess that recordings have been stored for years, from back before it was feasible to actually process them.
a day ago
The private sector is already doing it (based on the puppeted dead-people profiles on Facebook, and people being banned on Discord for messages they sent a few years ago). Given that data is quite easily handed over from private corporations to the government, I'd go a step further and say AI profiling is probably already happening at a certain level of governance. Can you imagine the dystopia when this is properly linked together? The system will be able to completely delete independent action from its limbs. "Officer XYZ accessed a nondeterministic AI summary of the perp's Facebook messages based on an involuntary facial scan. The AI summary stated the perp held anti-<current regime> views. Officer XYZ made the decision to shoot on sight, based on the heightened danger presented by the AI assessment. No further investigation required, system functioned correctly, case closed."
a day ago
> Even today, I generally assume that my phone could be tapped; even when talking with my trusted work colleagues, friends, and family.
Adding to that: If you live in a one-party consent state assume you're being recorded by any of the parties in a face-to-face conversation, too.
Yeah-- it sucks that the world is this way. I deal with it. What I don't want to see are draconian controls on technology (which will ultimately be ineffective) in an attempt to put the genie back in the bottle.
a day ago
> If you live in a one-party consent state assume you're being recorded by any of the parties in a face-to-face conversation, too.
Even if you don't live in a one-party consent state, assume you're being (perhaps illegally) recorded by any of the parties in a face-to-face conversation too.
a day ago
It just seems so unhealthy to live in a low-trust society. Is there a way out?
Maybe AI agents that work in our personal best interests?
a day ago
I think tools like this are needed but the current default of "random SaaS bot joins every call and uploads the whole thing somewhere" feels like the worst possible version of it.
The whole point should be that the people in the meeting can actually focus on the conversation rather than half listening while trying to write down enough notes to remember it later. If you're paying people good money to be in a meeting, having them spend half of it doing low quality note taking is a bit mad.
This is basically the reason I've been building Whistle Enterprise (https://whistle-enterprise.com). I'd much rather have something where I choose to record a meeting, process it locally, generate the document and then decide what to keep or delete. I mean yea, it still creates a record so it doesn't solve the legal / discovery side, but at least you're not also adding a random third party into the middle of every conversation.
a day ago
100% I picked up shorthand to take notes quicker, but I'm still half-in/half-out of the conversation, mentally deciding what's important enough to jot down. This sounds like a great solution.
a day ago
How many times have you guys been in a meeting, not realized someone turned on AI notes, and said things you would rather have not been recorded and auto-emailed to everyone after the meeting? For me it's happened more than a few times and while maybe there's a silver lining of people hearing the unvarnished truth, I think it's going to change the dynamics of meetings, in that people are not talking honestly anymore, you have to put on a show for the AI note taker.
a day ago
It's not just work meetings. This is being taken to healthcare settings, also.
20 hours ago
That's fine with me actually. I recently had an appointment where the doctor asked if he could use AI notes, I said yes, and meeting was so much nicer than before when the doctor is typing half the time. We can have a real conversation, nothing I say is missed, and later on the doctor can use AI to cross reference results or whatever else with our past discussions. That's win-win all around.
18 hours ago
The cases I've heard of where AI doctor notes went badly, is in recording numbers that another doctor relies upon. If it's just a casual conversation, it's probably okay. If it's about cancer treatments, maybe ask for manual notes.
I've had doctors write horribly-incorrect notes in my digital chart after appointments, so please don't think I'm trying to be a Luddite.
a day ago
Somewhat off-topic: I've spent thousands of pounds on legal advice which has ranged from poor to mediocre. I found that most solicitors would refrain from giving proper advice and are there only to be instructed. You have to do your own homework, read the law, the case-law, prepare notes and documents, etc. With the rise of LLMs I found it easier to do all these things and come prepared to these meetings, or even do some of the solicitor's work on your own. For example I've found Gemini 3 to be exceptional at reasoning on the legal side - to the extent where I was able to explore and reason about very thorny topics from all sides.
I found the legal profession to be a prime candidate to disruption using LLMs, especially the initial consultation phase (Do I have a claim?). One of the things that's protecting the status-quo, for the time being, is the law - for example in the UK you can't actually offer any sort of legal services without being SRA-accredited. There's also lots of secrecy within the profession, and lots of procedural tricks that lay people are not aware of. AI could make all of these more accessible for the lay person.
17 hours ago
IANAL
How can AI learn the secret procedural tricks? Where's the training data?
And is there a legal consequence for AI giving bad/incorrect legal advice? Can they get disbarred?
Can you be sure the AI tool even read an entire piece of legislation (inb4 "you can't with lawyers either" : I thought we're aiming for better)?
How will they understand the inner workings of courts, CPS etc? How will they network with other lawyers for advice and learning (how will the LLM train on that)?
a day ago
AI meeting notes are not transcripts. While they do cause an unprecedented amount of record creation (as the article notes), there are also challenges that a defense can use. Note takers get small details wrong all the time, they often are making notes FOR someone so it biases what is documented, their prompting is opaque, and they can't be cross-examined. We will likely see situations where the note taker and a witness who participated in the meeting disagree.
a day ago
Some companies want no records at all, see:
"2028 – A Dystopian Story By Jack Ganssle":
http://www.ganssle.com/articles/2028adystopianstory.htm
Known as ’The Rule of 26’, this is sometimes given as a reason NOT to keep engineering notebooks, etc. Under Federal Rule 26 you are at fault if you did not volunteer the records before they are requested, including any backups.
From Cornell Law:
LII Federal Rules of Civil Procedure Rule 26. Duty to Disclose; General Provisions Governing Discovery
(a) Required Disclosures.
(1) Initial Disclosure.
(A) In General. Except as exempted by Rule 26(a)(1)(B) or as otherwise stipulated or ordered by the court, a party must, without awaiting a discovery request, provide to the other parties:
(i) the name and, if known, the address and telephone number of each individual likely to have discoverable information—along with the subjects of that information—that the disclosing party may use to support its claims or defenses, unless the use would be solely for impeachment;
(ii) a copy—or a description by category and location—of all documents, electronically stored information, and tangible things that the disclosing party has in its possession, custody, or control and may use to support its claims or defenses, unless the use would be solely for impeachment; …
a day ago
Much of my experience with corporate counsel is one of 2 extremes: "keep everything"[1] or "keep nothing". Keep everything, because then you can't be caught out deleting something possibly relevant, which looks very, very bad in court. Keep nothing, because then opposing counsel can't catch you out only keeping things that make you look good in court.
[1] There's actually a subset of this, which includes "...until you are legally allowed to delete it, then delete everything". This is driven by regulation (e.g. SOX in the US).
a day ago
This was interesting and sent me down a research hole.
General conclusion:
Corporate litigation is mostly just a series of self-investigations so that both sides can learn what both sides actually know, given that neither side knows much about themselves OR the other side. At the same time both sides are trying to stop the other side from getting the judge to order them to do more investigating.
a day ago
"2028 – A Dystopian Story By Jack Ganssle"
If Mark Z was exactly himself but not successful and filled with resentment, he would write something like this. The smugness, the egotism of that story. It's so obvious that engineer types, like almost all middle class variations, are part of the problem and somehow think they are the solution. Bleh.
a day ago
See also the OpenAI vs. Musk trial, where Greg Brockman's diary and Sam Altman's texts have taken center stage.
a day ago
The main difference between a transcription error and a summarization error is that what was actually said may not get transcribed correctly, but you can always go back to the audio to check. Summarization errors are different because the narrative may sound coherent on the surface but doesn't necessarily represent what actually happened. A coherent summary that isn't accurate may be accepted as fact when in reality it is not. Only if the actual audio is checked would the discrepancies be found.
A lawyer may just accept it, believing the summary accurately represents the transcription. When AI summarizes a meeting, it does not catch the nuances of what actually happened. An offhand or dissenting comment may be critical but not caught in the summary. The AI compresses but can easily miss important details that matter. The consequences of only using the AI summary are potentially catastrophic. Context could be easily misunderstood, critical details may be left out, things that weren't actually said could be accepted as fact.
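One cheap safeguard against the failure mode described above, sketched here under the assumption that the raw transcript is retained alongside the summary, is to mechanically flag any direct quotation in a summary that never appears in the transcript (the function and sample text are illustrative, not from any real tool):

```python
import re

def unsupported_quotes(summary: str, transcript: str) -> list[str]:
    """Return quoted phrases from the summary that never occur in the transcript.

    A crude check: real verification would need fuzzy matching and speaker
    attribution, but even this catches outright fabricated quotations.
    """
    normalized = " ".join(transcript.lower().split())
    quotes = re.findall(r'"([^"]+)"', summary)
    return [q for q in quotes if " ".join(q.lower().split()) not in normalized]

transcript = "Alice said we are rolling out payments in France next quarter."
summary = 'Alice: "rolling out payments in France" and "rolling out in Russia".'
print(unsupported_quotes(summary, transcript))  # ['rolling out in Russia']
```

The point is only that discrepancies are detectable when the audio or transcript survives; a summary with no underlying record offers nothing to check against.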
a day ago
Honest question:
Do these systems not share data with the AI servers? Or are they all local (on-site, not on-computer)?
I am totally baffled by the trust people put on these systems, sharing with them the most obviously private data.
a day ago
Most services have privacy policies that boil down to:
- we promise not to share PII (defined as narrowly as possible)
- we promise not to share payment information except with our payment system
- if you pay us, we promise not to train LLMs on your data
- you agree that everything else can be used for any business purpose, including marketing, intelligence gathering, and "sharing with our 1735 trusted partners".
15 hours ago
OK. But those could be "Zuckerberg" promises?
a day ago
> I am totally baffled by the trust people put on these systems
The average person doesn't care about online privacy.
a day ago
They care, but realize that there is no such thing as privacy anymore. The amount of obsession required to maybe maintain some degree of privacy is not something most people are willing to do.
a day ago
When the average person thinks about "online privacy" they think about keeping things private from other people. They don't think about keeping their data private from the companies hosting/processing their data.
a day ago
You are making a lot of assumptions about the "average" person. Here's a Pew Research study that says otherwise:
https://www.pewresearch.org/internet/2019/11/15/americans-an...
a day ago
That research study also concludes that “59% of people have no understanding of what companies do with the data they collect”
To me, that says the average person doesn’t care enough (they care, just not enough) to do anything about it.
They might care enough to spend 5 seconds signing a petition, but not enough to spend 5 minutes installing an ad blocker, and definitely not enough to spend 5 hours doing anything more extreme like de-googling their life.
a day ago
If you are in a trusted industry like finance or healthcare, the popular ones generally have industry wide privacy certification like HIPAA compliant, SOC 2 Type 2 etc.
15 hours ago
Ok, thank you.
a day ago
This is where I think either realtime transcription (or just-in-time transcription followed by deleting everything) will be the end state.
Real-time transcription where the AI actually takes notes (instead of recording every word and keeping a dump of it somewhere) is especially appealing. Then there isn't any record of the raw sentences, and things that aren't relevant are immediately discarded without any written record.
OpenAI's realtime Whisper and other such models will become the default over time.
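A minimal sketch of that discard-as-you-go pattern: everything here is hypothetical, and the keyword filter stands in for whatever relevance model a real note taker would use. The key property is that raw text is never stored, only the distilled notes.

```python
from dataclasses import dataclass, field

@dataclass
class EphemeralNoteTaker:
    """Keeps distilled notes only; raw transcript chunks are dropped immediately."""
    keywords: tuple = ("decision", "action", "deadline")
    notes: list = field(default_factory=list)

    def ingest(self, raw_chunk: str) -> None:
        # Keep a sentence only if it looks relevant; the raw chunk is
        # never stored, so no verbatim record accumulates.
        for sentence in raw_chunk.split("."):
            if any(k in sentence.lower() for k in self.keywords):
                self.notes.append(sentence.strip())
        # raw_chunk goes out of scope here -- nothing else retains it

taker = EphemeralNoteTaker()
taker.ingest("We chatted about lunch. Decision: ship Friday. The weather is nice.")
print(taker.notes)  # ['Decision: ship Friday']
```

Whether a court would treat the surviving notes differently from a full transcript is exactly the open legal question in the article.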
a day ago
Why are phone calls or SMS under attorney-client privilege[1] but AI transcripts of a call with an attorney aren't? I was under the impression there isn't any expectation of privacy from phone carriers either. An AI's recording of a conversation is actually more like a fourth party, no?
[1] https://www.findlaw.com/criminal/criminal-legal-help/what-ar...
a day ago
I know this is also a legal question, but I sometimes wonder about ML-based meeting transcription and summary services. Pre-ML, if you wanted to record a meeting, the de-facto process (at least in the US) was to announce before the meeting that it was going to be recorded, and then announce once you started recording that it was being recorded. Now that we're in an ML world, the default seems to be transcription and summarization is turned on and none of the meeting attendees are asked and most do not have the ability to turn it off.
That feels hinky to me...
a day ago
It certainly has a chilling effect. Meetings where there are robust discussions may become a thing of the past.
a day ago
I built https://getminute.me to help solve this issue, transcription, summaries, chapters all done on device using local AI models for Mac OS and iOS.
Seems to have had a good reception so far within the legal world who was my original target market for this.
Is it as powerful as the services using insane compute? No. Does it do a pretty decent job without using a third party? Yes.
a day ago
In business communication of every kind - voice, text, email, paper sticky-notes, doesn't matter - assume someone is sharing your information with someone you don't know is seeing it. Communicate as if you have an audience at all times. Also, I am not a lawyer, but the error rates are too high for anything transcribed to stand up in court against a good lawyer, unless corroborated by human witnesses.
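On the error-rate point: the standard metric for transcription quality is word error rate (WER), the word-level edit distance between a reference transcript and the machine's output, divided by the reference length. A minimal sketch of the computation:

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """WER = (substitutions + insertions + deletions) / reference word count,
    computed as Levenshtein distance over words."""
    ref, hyp = reference.lower().split(), hypothesis.lower().split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # substitution or match
    return dp[len(ref)][len(hyp)] / len(ref)

# One wrong word out of four is a 25% WER -- and, as noted upthread,
# a single substituted word ("Russia" for "France") can invert the meaning.
print(word_error_rate("we launch in france", "we launch in russia"))  # 0.25
```

WER treats every word equally, which understates the legal risk: a 5% WER concentrated on names, numbers, or negations is far worse than 5% spread over filler words.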
a day ago
you are talking management level intelligence to a very Individual Contributor crowd. People will nuke every opportunity presented to them in their life in defense of NDAs that don't matter - better called Not Doing Anythings.
a day ago
>> Executives and corporate boards generally expect conversations with their legal team about legal matters to have attorney-client privilege. They lose that protection if they share the same information with outside parties — and it’s possible that an A.I. note taker could have the same effect.
Total oversimplification. The fact is the privilege is a rule totally in the hands of the court. Every time a new communications technology comes up, someone shouts about privilege, but the courts still accept it. (Telephones, cell phones, emails, IMs, Zoom court - each has had its day in the A-C privilege debate and been accepted.) What matters is that the parties intended and expected communications to be privileged.
As an example: I had a crim law prof who had been a NYC public defender in the 70s/80s. She had regularly interviewed clients at Rikers Island. All interviews were listened to by guards, and she said you could even pay to get a copy of the recording. But these interviews were still covered by attorney-client privilege. No court would allow such evidence, but that doesn't mean the prison could not use it for jail safety. Why does this matter: because the presence of a third party doesn't mean anything. This isn't magic. An eavesdropper does not nullify the spell. Whether something is or is not privileged depends on the rules followed in the local jurisdiction, and no jurisdiction has ever followed a simplistic "presence of a third party" rule.
Until someone demonstrates an example of an AI actually leaking privileged information, courts are going to chalk it up as just another electronic tool for recording communications.
a day ago
It sounds like the prison recordings were compulsory, which is a different kettle of fish. The key phrase "if they share" implies voluntary and deliberate action, and is not much of an oversimplification imo.
> What matters is that the parties intended and expected communications to be privileged.
I would contend that your summary, not theirs, is an oversimplification. Jurisdictions will obviously differ, but privilege does not attach merely because of the intent and beliefs of the lawyer and client.
a day ago
Well, I try to avoid the R word. The actual legal term would be reasonable intention, not literal expressed intention - i.e., putting an A-C warning on every email won't be enough on its own.
IMHO we should just assume the R word before every verb in every legal discussion. That is how reality works. These are not spells. If I express that I intend something to be private, then announce it using a megaphone at a basketball game, my intention is no longer reasonable regardless of what magic words I have thrown into my communication. Act like an idiot and a court will treat you like an idiot.
a day ago
Alternative to archive.is
No Javascript, no CAPTCHA, no geoblocking, no DDoS directed at blog
https://static.nytimes.com/narrated-articles/synthetic/artic...
a day ago
Alternative to archive.is
Works where archive.is is blocked
Text-only, no DDoS directed at blog
view-source:https://www.nytimes.com/2026/05/09/business/dealbook/ai-notetakers-legal-risk.html
Save as 1.htm
Something like egrep -o "(\"text\":\"[^\"]+)|(\"textAlign\":\"LEFT\")|(\"url\":\"[^\"]+)|(\"__typename\":\"TextInline\")" 1.htm \
|sed '/\"url\":\"/{s/??.*//;s/$/\">/;s/.\{7\}/<a href=\"/;};
/\"__typename\":\"TextInline\"/{s/\"$/<\/a>/;s/.\{24\}//;};
s/\"textAlign\":\"LEFT\"/<p>/g;/\"text\":\"/s/.\{8\}//' \
|sed '1s/^/<meta charset=utf-8><meta name=viewport content=width=device-width>/' > 2.htm
rm 1.htm
firefox ./2.htm
NB. JavaScript and CSS interpreters are needed only for the Datadome challenge. The following DNS data, e.g., A RRs, are required: ct.captcha-delivery.com
geo.captcha-delivery.com
www.nytimes.com
g1.nyt.com
No other DNS data is required
a day ago
Cool!
Would you be willing to license this code as GPL-3.0-or-later, or some other free license? I'd like to include a JavaScript derivative of this for Haketilo (a userscript manager). I would add it to a collection of scripts that aim to replace proprietary JavaScript here: https://codeberg.org/JacobK/unfinished-site-fixes/
19 hours ago
https://edition.cnn.com/2022/06/09/us/podfasters-audio-accel...
Apparently not everyone listens to audio at the same speed
a day ago
you forgot to mention it requires 13 minutes of audio listening
a day ago
There's an AI to do that for you
19 hours ago
If I was a lawyer I would worry about auto note takers too. So easy to view something out of context as something totally different from what actually happened. Just my 2 cents
a day ago
There's some irony in how, in an article like this, the phrase "recorders that use A.I. to log live interactions have become a product category" is linked to an article on the best AI note takers...
a day ago
Attorney here. I'm quite concerned that AI note-taking applications, if used by clients to keep track of conversations and meetings that would otherwise be privileged, might be jeopardizing their rights by doing so. I certainly have been advising clients myself, at least if I know them to be using AI for productivity or otherwise, not to use note-taking or chat tools during calls or meetings, or to discuss anything concerning legal matters with any AI chatbot or agent or tool, because it is all potentially discoverable under the rapidly evolving case law in this area.
Although I definitely think that any alternative approach would be fraught with legal peril, I strongly disagree that this SHOULD be the state of the law. AI note-keeping tools, chatbots, and other AI-generated services are not sentient beings, but most importantly, they are not natural or even artificial persons. The whole principle of waiver in the area of privilege is based on the notion that an otherwise private attorney-client communication, or document created that is covered by the attorney work-product doctrine, has been copied to or shared with a THIRD PARTY. A third party is a party, which at minimum is a legal or natural person -- perhaps a corporation or LLC, but not a computer, dolphin, chimpanzee, or chair. AI note-keeping tools, models, chatbots, etc., are obviously not natural persons (human beings), but they are also not even artificial persons. They cannot sue or be sued, own property, enter judgments or be held liable, or have any legally enforceable obligations. Legally, chatbots have no "standing" or personhood, even of the artificial sort assigned to corporations and LLCs (which, although not human, can sue or be sued, own property, obtain judgments, have legally enforceable obligations, etc.). There simply is no logical theory of waiver due to copying a third party that gets triggered by "conversing" with a chatbot.
The stronger argument I have seen, which Judge Rakoff cited about 6 weeks ago in an SDNY ruling, and that perhaps makes more sense (at least on its face), is to point to the ChatGPT or Claude Terms of Service. Those Terms make the contents of any chat histories between users and the AI service capable of being copied and utilized for training or other purposes. However, those terms of service are also quite similar to the same provisions often found in email and SMS text message providers, and for Zoom, Teams, WhatsApp, and plenty of other channels used by attorneys to communicate with clients. I haven't had the opportunity yet to contrast them, but I would be surprised if the software products routinely used to facilitate attorney-client conversations don't contain substantively similar if not identical provisions to the ones that Judge Rakoff found persuasive to deem privilege waived with respect to client-ChatGPT conversations. I've been trying cases for nearly 20 years across multiple jurisdictions and have never even seen anyone argue, at least not since the dawn of the email era at the very beginning of my career, that attorneys and clients who share privileged communications via email have waived the privilege because of Outlook's or Gmail's terms of service that say that the service can train on the contents of the emails for whatever reasons. In fact, I do recall that argument being made a long time ago, and I can say that it has been squarely rejected out of hand in every jurisdiction and court I have ever appeared. I don't know anyone who would even make such an argument today. 
(I'll distinguish the different case of an employee suing their employer but using the employer-issued email account to communicate with outside counsel about their employment claims; that scenario really is a waiver because the employee's contract with the employer typically includes a provision that the emails are owned by the employer and may be reviewed by them, which is very different than having an automated Gmail or Outlook script processing metadata or even data from massive numbers of emails.) In every jurisdiction I have ever appeared, the waiver of privilege only arises from copying a third party, not from using email, or text, or Teams, or Zoom, to communicate with a client in a manner that otherwise would be considered privileged but for the medium of communication. It is possible that under particular terms of service, a different result might be warranted, such as if the model also includes terms that say the engineers might read the actual contents of chat histories, but otherwise, the OpenAI or Claude Terms of Service seem like an awfully thin reed upon which to stack the entire weight of this theory of waiver.
This is not legal advice, and no attorney-client relationship is formed; I'm just stating my opinion while indicating that this is not the way I think the law should be headed.
a day ago
Australian lawyer here and fully agree - this issue has been mentioned a few times in some recent Family Court cases (where self-representation and thus AI usage is more prevalent) but there hasn't been any direct decisions on it as there has been in the US. I would like to think that the same line of thinking would prevail here (i.e. that just because it could theoretically be read by someone else doesn't necessarily result in a waiver of privilege).
a day ago
I would be concerned about transcription error perhaps (e.g. non native speaker) where precision matters: engineering, compliance, regulation, legal, etc.
a day ago
Surprised healthcare isn't called out specifically. AI note takers have exploded in popularity in the US.
a day ago
Stringer Bell would be furious.
a day ago
Unrelated to the article, but how do you make a page that prevents the mouse scroll wheel from working? That's pretty impressive.
a day ago
It's not impressive, it's scummy hiding of news behind a paywall. They simply use some CSS trickery to set the height of the content to the size of your viewport, so there is nowhere to scroll to.
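A minimal sketch of that kind of trick (hypothetical selectors; the actual site's stylesheet will differ):

```css
/* Cap the page at one viewport and hide the overflow,
   so the scroll wheel has nothing to move. */
html, body {
  max-height: 100vh;
  overflow: hidden;
}
```

Deleting the `overflow: hidden` rule in the browser's dev tools usually restores scrolling.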
a day ago
We turn off the AI note taker for the legal liabilities discussed in the NYT article. But we've also found its accuracy bad enough to warrant kicking it out, independent of legal issues. We have simultaneous participants from USA, India, France, Israel, and Estonia. Some have heavy accents (English is the meeting language). No problem for the humans, but the AI just can't cope (yet).
a day ago
Many employers prohibit them by policy.
a day ago
It’s hard when you’re no longer in control. Thanks AI!
a day ago
Paywall: can anyone share what the issue is?
Inaccuracy in meeting minutes?
Leaking private info, re security of notes?
I have never used them (don't trust them to accurately capture what is important in a meeting vs just noting what's mentioned), but the concept seems very useful to me.
a day ago
Reminds me of when I worked for a small shop which had the copier maintenance contract at a local college --- when something went wrong and wasn't properly addressed, my bosses found themselves being held to account with their own words from prior phone calls being quoted back to them verbatim --- which they were mystified by until I explained that the administrators had all come up from the clerical pool and knew shorthand.
a day ago
The main risk is to attorney-client privilege, and it's already been tested in New York: if you transcribe a call, you need to turn over the transcriptions, and they can subpoena the company doing the transcription for the records if you refuse.
a day ago
They are saying that it could invalidate attorney client privilege because the transcription could technically be available to an outside party.
I suspect what isn't being said by the lawyers is they want to keep attorney client privilege so they can outright lie.
6 hours ago
>they want to keep attorney client privilege so they can outright lie
As a trial attorney for over 40 years, I find that an incredibly offensive take. Attorney-client privilege is usually litigation related, and a prime example of such conversations involves our advising our client of the prospects of prevailing at trial and whether to engage in settlement discussions, with money amounts involved. If some day you are sued and you have a conversation with your lawyer about your financial worth as well as how much you are willing to pay to the person suing you, and that information ends up being turned over to the person suing you, you won't be so snide about the importance of attorney-client privilege.
a day ago
It's in the viewable text on the page.
> A trendy productivity hack, A.I. note takers are capturing every joke and offhand comment in many meetings. They could also potentially waive attorney-client privilege.
By now everyone knows that AI notes that aren't curated by a human will catch every silly thing that was said in the meeting while omitting the context of the tone or body language. Something as simple as "yeah, right" has vastly different meanings depending on how it was said. In a different context it's already been established that using AI breaks attorney-client privilege [0], and this concern has been raised before by law firms [1][2] and the American Bar Association [3] (you can just hit escape before the paywall loads to see the full content). A judge will have to weigh in on this one too.
I don't know what's with the wave of paywalled articles that keep making it to the front page without any workaround included in the submission. Even when you coax the text out of the page source, they're not very insightful to begin with.
[0] https://perkinscoie.com/insights/update/federal-court-rules-...
[1] https://www.smithlaw.com/newsroom/publications/the-silent-gu...
[2] https://natlawreview.com/article/when-ai-takes-notes-protect...
[3] https://www.americanbar.org/groups/gpsolo/resources/ereport/...
a day ago
> It's in the viewable text on the page.
Not for me - there was no viewable text.
a day ago
People opt in to the panopticon and then discover they have no more secrets. I'm surprised lawyers fall for that as well.
a day ago
The doofus lawyer probably didn't realise; I wouldn't call it opt-in.
a day ago
If a lawyer takes notes and puts them in a computer, or a cloud drive, or sends them over email, they are still covered by attorney-client privilege, right? If they use an AI to do it, it's treated more like a third party, no longer covered by the same privilege. If there's no court decision on this, it only takes one bad assumption to get burned by using AI.
To be fair, the attorney-client privilege should be completely technology/medium agnostic. If the intention is to have that info stay between client and attorney, nothing should change this.
a day ago
Frankly, at the very least, any entity that receives any kind of public money and/or benefits, including any and all corporations, should be required, as a condition of receiving public money or benefits of any kind, including legal protections (i.e., limited liability), to record every single conversation anyone has at the executive level, where all the horrible decisions are made, the plundering is done, and the current de facto immunity from prosecution for crimes is enjoyed.
If you receive even just limited legal liability protections, you are receiving a public benefit, and you should be required to record every single conversation, including, if you involve your personal devices, every conversation had on them, short of reporting accidental contamination.
It seems people are way too OK with all the abuse and criminality of our politicians and executives at all levels, and accountability really needs to be reintroduced into the system. Or are we simply going to wait for a couple more Luigi events, with the ruling class then just dropping the mask on the prison-system surveillance state they have constructed around even the USA?
But I realize that is probably wishful thinking, because it seems we long ago crossed the threshold of accountability where citizens had enough power to actually effect a requirement that all meetings and communications of any and all politicians, bureaucrats, and executives be recorded, and even made public in most cases.