VladVladikoff
4 days ago
Considering that very few people exit from AI searches into the web, rather than just ending the session (having received the answer they were looking for), it seems to me that this report would vastly overstate traditional search engine market share. Personally, I've basically stopped using Google as my primary search. I usually start by searching in an LLM, especially if the query is complex (e.g. give me a summary of the USA's current lunar missions and progress towards a lunar base). The only time I still go to Google is for maps-related searches, to find local businesses, and even then I will often go directly to maps.google.com. I would like to see a real report on market share. I expect Google has lost a lot and hasn't yet admitted it.
highwaylights
4 days ago
If you go Google something right now, you're not doing a web search like you were even a year ago - the first thing that comes up (and it takes up most of the screen, depending on your device) is a Gemini response to your query.
At the least, it can be inferred that Google has fundamentally changed their main product to mimic a competitor, which is something you just don't do if everything's OK.
chaos_emergent
4 days ago
Knowledge cards at the top of Google results have been around for at least 12 years; I'd interpret the LLM-based responses as an iteration of a feature that's been around for a while rather than mimicry of a competitor.
disgruntledphd2
4 days ago
> At the least it can be inferred that Google has fundamentally changed their main product to mimic a competitor, which is something you just don’t do if everything’s OK.
I mean, the big thing that has changed is that investors are all in on AI, and Google looked like they were behind in this area, so they put it front and center so that they can talk nonsense about it on investor calls.
idle_zealot
4 days ago
> Especially if the query is complex (e.g. give me a summary of the USA's current lunar missions and progress towards a lunar base).
This terrifies me. The number of ostensibly smart, curious people who now fill their knowledge gaps with pseudorandom information from LLMs that's accurate just often enough to lower mental guards. I'm not an idiot; I know most people never did the whole "check and corroborate multiple sources" thing. What actually happened in the average case was that a person delegated trust to a few parties who, in their view, aligned with their perspective. Still, that sounds infinitely preferable to "whatever OpenAI/Google/whoever's computer says is probably right". When people steelman using LLMs for knowledge gathering, they like to position it as a first step to break in on a topic, learn what there is to learn, which can then be followed by more specific research that uses actual sources. I posit that the portion of AI users actually following up that way is vanishingly small, smaller even than the portion of people who read multiple news sources and research the credibility of the publications.
I value easy access to information very highly, but it seems like when people vote with their feet, eyes, and wallets, that's not what you get. You get fast and easy, but totally unreliable, information. The information landscape has never been great, but it seems to only get worse with each paradigm shift. I struggle to even imagine a hypothetical world where reliable information is easy to access. How do you scale that? How do you make it robust to attack or decay? Maybe the closest thing we have now is Wikipedia; is there something there that could be applied more broadly?
VladVladikoff
4 days ago
For a brief overview of a topic, the accuracy is good enough. It might get some minor details wrong, but those are generally superfluous to the topic. It typically breaks down when you're really getting into the weeds, or into really niche subjects, at which point you have exceeded the utility of the LLMs. I have read many blog posts linked from first-ranking Google results in the past and found their answers to have inaccuracies as well; how is that better?
idle_zealot
3 days ago
> I have read many blog posts linked from first-ranking Google results in the past and found their answers to have inaccuracies as well; how is that better?
It's roughly as bad, if you assume the same degree of trust in both scenarios. I don't make that assumption. I get the sense that people are more likely to trust the AI answer at the top of the search results page or handed back to them in a ChatGPT conversation than they are to totally buy a random blogger. If I'm wrong, then great.
nonfamous
4 days ago
It seems like the key metric missing from this report is the volume of referrals, reported over time. Ideally, it would be segmented to isolate user-initiated web searches, filtering out things like searches generated via Spotlight on iOS.
I’d be very interested to see the trendline of user-initiated search over time.