AI assisted search-based research works now

283 points | posted 3 months ago
by simonw

105 Comments

CSMastermind

3 months ago

The various deep research products don't work well for me. For example I asked these tools yesterday, "How many unique NFL players were on the roster for at least one regular season game during the 2024 season? I'd like the specific number not a general estimate."

I as a human know how to find this information. The game day rosters for many NFL teams are available on many sites. It would be tedious but possible for me to find this number. It might take an hour of my time.

But despite this being a relatively easy research task all of the deep research tools I tried (OpenAI, Google, and Perplexity) completely failed and just gave me a general estimate.

Based on this article I tried that search just using o3 without deep research and it still failed miserably.

simonw

3 months ago

That is an excellent prompt to tuck away in your back pocket and try again on future iterations of this technology. It's going to be an interesting milestone when, or if, any of these systems get good enough at comprehensive research to provide a correct answer.

minraws

3 months ago

If you keep the prompt the same, at some point the data will appear in the training set and we might have an answer.

So even though it might be a good check today, it might not remain such a good benchmark.

I think we need a way to keep updating prompts, without increasing their complexity, to properly verify model improvements. ARC Deep Research, anyone?

wontonaroo

3 months ago

I used Google AI Studio instead of Google Gemini App because it provides references to the search results.

Google AI Studio gave me 2227 as a possible exact answer and linked to these comments, because there is a comment further down which claims that is the exact answer. That comment was 2 hours old when I ran the prompt.

It also provided a code example of how to find it using the Python NFL data library mentioned in one of the comments here.

patapong

3 months ago

So the time for data leakage, from posting a question and answer on the internet to LLMs having access to that answer, is less than 2 hours... Does not bode well for the benchmarks of the future!

gilbetron

3 months ago

To avoid "result corruption" I asked a similar question, but for NBA players, and used O4-mini, and got a specific answer:

"For the 2023‑24 NBA regular season (which ran from October 24, 2023 to April 14, 2024), a total of 561 distinct players logged at least one game appearance, as indexed by their “Rk” on the Basketball‑Reference “Player Stats: Totals” page (the final rank shown is 561)"

Doing a quick search on my own, this number seems like it could be correct.

user

3 months ago

[deleted]

neom

3 months ago

Is it accurate that there are 544 rosters? If so, even at 2 minutes a roster, isn't that days of work, even if you coded something? How would you go about completing this task in 1 hour as a human? (Also, ChatGPT 4.1 gave me 2,503, and it said it used the NFL 2024 fact book.)

CSMastermind

3 months ago

544 rosters but half as many games (because the teams play each other).

Technically I can probably do it in about 10 minutes because I've worked with these kind of stats before and know about packages that will get you this basically instantly (https://pypi.org/project/nfl-data-py/).

It's exactly 4 lines of code to find the correct answer, which is 2,227.
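
Roughly, a sketch of those four lines (from memory, so the exact column names like "game_type" and "player_id" may be off):

    # Sketch using nfl-data-py; the column names here are assumptions.
    import nfl_data_py as nfl

    rosters = nfl.import_weekly_rosters([2024])        # one row per player per week
    regular = rosters[rosters["game_type"] == "REG"]   # regular-season weeks only
    print(regular["player_id"].nunique())              # count of unique players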

Assuming I didn't know about that package though I'd open a site like pro football reference up, middle click on each game to open the page in a new tab, click through the tabs, copy paste the rosters into sublime text, do some regex to get the names one per line, drop the new one per line list into sortmylist or a similar utility, dedupe it, and then paste it back into sublime text to get the line count.

That would probably take me about an hour.

dghlsakjg

3 months ago

If the rosters are in some sort of pretty easily parsed or scrapable format from the nfl, as sports stats typically are, this is just a matter of finding every unique name. This is something that I imagine would take less than an hour or two for a very beginner coder, and maybe a second or two for the code to actually run
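
Something like this sketch, say (the URL list and HTML selectors are made-up placeholders, not a real site structure):

    # Hedged sketch of scrape-and-dedupe; the URLs and selector below are
    # hypothetical placeholders, not a real endpoint.
    import requests
    from bs4 import BeautifulSoup

    game_urls: list[str] = []  # fill with the 272 box-score pages, however collected

    players = set()
    for url in game_urls:
        soup = BeautifulSoup(requests.get(url).text, "html.parser")
        for cell in soup.select("table.roster td.player"):  # hypothetical selector
            players.add(cell.get_text(strip=True))

    print(len(players))  # unique players across every game-day roster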

raybb

3 months ago

Similarly, I asked it a rather simple question: give me a list of AC repair places near me with their phone numbers. Weirdly, Gemini repeated a bunch of them like 3 or 4 times, gave some completely wrong phone numbers, and found many places hours away but labeled them as in the neighboring city.

paulsutter

3 months ago

I bet these models could create a Python program that does this

Retric

3 months ago

Maybe eventually, but I bet it’s not going to work with less than 30 minutes of effort on your part.

If “It might take an hour of my time” to get the correct answer, then there’s a lower bound for trying a shortcut that might not work.

danielmarkbruce

3 months ago

This is just a bad match to the capabilities. What you are actually looking for is analysis, similar in nature to what a data scientist may do.

The deep research capabilities are much better suited to more qualitative research / aggregation.

pton_xd

3 months ago

> The deep research capabilities are much better suited to more qualitative research / aggregation.

Unfortunately sentiment analysis like "Tell me how you feel about how many players the NFL has" is just way less useful than: "Tell me how many players the NFL has."

southernplaces7

3 months ago

Your logic is.... strange...

Because it failed miserably at a very simple task of looking through some scattered charts, the human asking should blame themselves for this basic failure and trust it to do better with much harder and more specialized tasks?

lucyjojo

3 months ago

First person that makes a good exact aggregation AI will make so much money...

Precise aggregation is what so many juniors do in so many fields of work it's not even funny...

johnnyanmac

3 months ago

If AI can't look up and read a chart, why would I trust it with any real aggregation?

oytis

3 months ago

So it's not doing well at things we can verify and measure, but supposedly it's doing much better at things we can't measure. Except we can't measure them, so we actually have no idea how well it's doing. The most impressive feature of LLMs remains their ability to impress.

user

3 months ago

[deleted]

kenjackson

3 months ago

o3 deep research gave me an answer after I requested an exact answer again (it gave me an estimate first): 2147.

simonw

3 months ago

I think it's important to keep tabs on things that LLM systems fail at (or don't do well enough on) and try to notice when their performance rises above that bar.

Gemini 2.5 Pro and o3/o4-mini seem to have crossed a threshold for a bunch of things (at least for me) in the last few weeks.

Tasteful, effective use of the search tool for o3/o4-mini is one of those. Being able to "reason" effectively over long context inputs (particularly useful for understanding and debugging larger volumes of code) is another.

skydhash

3 months ago

One issue I can find with this workflow is tunnel vision: making ill-informed decisions because of a lack of surrounding information. I often skim books because even if I don't retain the content, I build a mental map that helps me find further information when I need it. I wouldn't try to construct a complete answer to a question with just that amount of information, but I will use the map to quickly locate the source and gather more information to synthesize the answer.

One could use the above workflow in the same way and argue that natural language search is more intuitive than keyword-based search. But I don't think that brings any meaningful productivity improvement.

> Being able to "reason" effectively over long context inputs (particularly useful for understanding and debugging larger volumes of code) is another.

Any time I see this "wish" pop up, my suggestion is to try a disassembler to reverse engineer some binary, to really understand the problem of coming up with a theory of a program (based on Naur's definition). Individual statements are always clear (programming languages are formal and have no ambiguity). The issue is grouping them, unambiguously defining the semantics of those groups, and finding the links between them, recursively.

Once that's done, what you'll have is a domain. And you could have skipped the whole thing by just learning the domain from a domain expert. So the only reasons to do this are that the code doesn't really implement the domain (bugs) or that it's hidden purposefully. The most productive workflow is to learn the domain first, then either look for discrepancies (first case) or focus on the missing part (second case). In the first case, the easiest approach is writing tests, and the more complete one is formal verification of the software.

otistravel

3 months ago

The most impressive demos of these tools always involve technical tasks where the user already knows enough to verify accuracy. But for the average person asking about health issues, legal questions, or historical facts? It's basically fancy snake oil - confident-sounding BS that people can't verify. The real breakthrough would be systems that are actually trustworthy without human verification, not slightly better BS generators. True AI research breakthroughs would admit uncertainty and provide citations for everything, not fake certainty like these tools do.

spongebobstoes

3 months ago

this remains true for pretty much all advice or information we receive. doctors, lawyers, accountants, teachers. there have been countless times that all of these professionals have given me bad advice or information

sure, at least I have someone to blame in that case. but in my experience, the AI is at least as reliable as a person who I don't personally know

neural_thing

3 months ago

I tested o3 on a medical issue I've had that 50+ doctors couldn't diagnose over the span of 6-7 years, ended up figuring it out through sheer luck. With my first prompt, it gave a list of probabilities, with the correct answer being listed as the third most likely. It also suggested correct tests to run for every option. I trust it way more than I trust human doctors who were confidently wrong about me for years.

FieryTransition

3 months ago

Plenty of studies show that these models are better at catching and diagnosing conditions than even a board of doctors. Doctors are good at other things, and I hope the future will allow doctors to use these models together with their practice.

The problem is when the AI makes a catastrophic prediction, and the layman can't see it.

otabdeveloper4

3 months ago

> but in my experience, the AI is at least as reliable as a person who I don't personally know

How do you know this?

sshine

3 months ago

The article doesn’t mention Kagi: The Assistant, a search-powered LLM frontend that came out of closed beta around the beginning of the year, and got included in all paid plans since yesterday.

It really is a game changer when the search engine underneath is good.

I find that an AI performing multiple searches on variations of keywords, and aggregating the top results across those keywords, is more thorough than most people, myself included, would be.

I had luck once asking what its search queries were. It usually provides the references.

simonw

3 months ago

I haven't tried Kagi's product here yet. Do you know which LLM it uses under the hood?

Edit: from https://help.kagi.com/kagi/ai/assistant.html it looks like the answer is "all of them":

> Access to the latest and most performant large language models from OpenAI, Anthropic, Meta, Google, Mistral, Amazon, Alibaba and DeepSeek

dcre

3 months ago

Yep, the regular paid Kagi sub comes with cheap models for free: GPT-4o-mini, Gemini 2.5 Flash, etc. If you pay extra you can get the SOTA models, though IMO Flash is good enough for most stuff if the search result context is good.

jsemrau

3 months ago

My main observations here are:

1. Technically it might be possible to search the Internet, but the search might not surface correct and/or useful information.

2. High-value information that would make a research report valuable is rarely public or free. This holds especially true in capital-intensive or regulated industries.

simonw

3 months ago

I fully expect one of the AI-related business models going forward to be charging subscriptions for LLM search tool access to those kinds of archives.

ChatGPT plus an extra $30/month for search access to a specific archive would make sense to me.

sshine

3 months ago

Kagi is $10/mo. for search and +$15/mo. for premium LLMs with agentic access to search.

jsemrau

3 months ago

Then I'd rather see domain-specific, agent-first data. I.e., not a simple API call but token -> BM25 -> token.
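
A toy sketch of what that token -> BM25 -> token step could look like, using the rank_bm25 package (the corpus and query here are placeholders):

    # Toy token -> BM25 -> token retrieval step with rank_bm25.
    from rank_bm25 import BM25Okapi

    corpus = [
        "the 2024 annual report discusses capital requirements",
        "regulated industries face strict disclosure rules",
        "public filings rarely contain high-value research data",
    ]
    bm25 = BM25Okapi([doc.split() for doc in corpus])

    query = "capital requirements in regulated industries".split()
    print(bm25.get_top_n(query, corpus, n=2))  # top documents fed back to the agent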

hadlock

3 months ago

o3/o4 seem to know how to search things like PyPI, crates.io, pkg.go.dev etc. and apply changes from there on the first try. My application (running on an older version of the code) was affected by a breaking change to how the event controller functioned in the newer version; o3 looked at the documentation and rewrote my code to use the new event controller. It used to be that you were trapped with the LLM being 3-8 months behind on package versions.

simonw

3 months ago

Huh, now I'm thinking that maybe a target for release notes should be to provide enough details that a good LLM can be used to apply fixes for any breaking changes.

rd

3 months ago

MCP maybe? A release notes MCP (maybe into ReadTheDocs or pypi) that understands upgrade instructions for every package.
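
A minimal sketch of what such a server might look like with the MCP Python SDK's FastMCP (the tool body is a stub; as far as I know nothing like this exists yet for PyPI or ReadTheDocs):

    # Hypothetical release-notes MCP server; the tool body is a stub.
    from mcp.server.fastmcp import FastMCP

    mcp = FastMCP("release-notes")

    @mcp.tool()
    def release_notes(package: str, from_version: str, to_version: str) -> str:
        """Return upgrade notes for a package between two versions."""
        # A real server would fetch and collate changelog entries from
        # PyPI, GitHub releases, or ReadTheDocs here.
        return f"No notes found for {package} {from_version}..{to_version}"

    if __name__ == "__main__":
        mcp.run()  # serve the tool over stdio to an MCP-capable client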

sanderjd

3 months ago

This is the thing I don't really love about MCP: Why should it require a separate protocol, rather than just good readable documentation?

navinsylvester

3 months ago

sitkack

3 months ago

This is devdocs to be consumed by LLMs, https://github.com/upstash/context7

Brilliant (and one less thing I have to build)!

sanderjd

3 months ago

I'm glad this exists, but can you describe to me why it needs to? Why can't agents just read the docs directly?

sitkack

3 months ago

By returning detailed docs for exactly what the AI is coding at the time, it greatly reduces the likelihood that it will make a mistake. It moves the problem from recall from the training data to transcription.

This is RAG, but for API docs.
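
A toy sketch of the pattern (the naive keyword lookup and the doc text below are stand-ins for whatever index the real tool uses):

    # Toy RAG-for-docs: retrieve the relevant chunk, prepend it to the prompt,
    # so the model transcribes from the docs instead of recalling from weights.
    doc_chunks = {
        "event_controller": "(doc text for the new event controller API)",
        "pagination": "(doc text for the paginator API)",
    }

    def build_prompt(task: str) -> str:
        # Naive retrieval: include chunks whose key appears in the task text.
        context = "\n".join(t for k, t in doc_chunks.items() if k in task)
        return f"Documentation:\n{context}\n\nTask: {task}"

    print(build_prompt("port this code to the new event_controller API"))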

TrackerFF

3 months ago

I'm not a researcher, but don't most researchers these days also upload their work to arXiv?

Sure, it's not a journal - but in some fields (Machine Learning, Math) it seems like everyone also uploads their stuff there. So if the models can crawl sites like arXiv, at least there's some decent stuff to be found.

jsemrau

3 months ago

Proper research, especially work contributed to conferences, is hard to come by and is usually managed by the conference organizers. arXiv has some, but it's limited.

It would be great if, for a DeepSearch tool for ML, I could just use arXiv as a source and have the agent search it. But so far I have not found an arXiv tool that does this well.

levocardia

3 months ago

Not outside of ML, physics, and math. Preprints are extremely rare in many (dare I say most) scientific fields, and of course many times you are interested not in the cutting edge work, but the foundational work in a field from the 60s, 70s, or 80s, all of which is locked behind a paywall. Or at least it's supposed to be, and corporate LLMs are not "allowed" to go poking around on sketchy Russian websites for non-paywalled versions.

qingcharles

2 months ago

I'm assuming many AI companies are probably scraping all the PDFs from the "shadow libraries" of the world that have done some of the work of liberating these papers from behind their paywalls. Obviously it's legally unsettled territory right now...

intended

3 months ago

I find that these conversations on HN end up covering similar positions constantly.

I believe that most positions are resolved if

1) you accept that these are fundamentally narrative tools. They build stories, in whatever style you wish. Stories of code, stories of project reports, stories of conversations.

2) this is balanced by the idea that the core of everything in our shared information economy is Verification.

The reason experts get use out of these tools, is because they can verify when the output is close enough to be indistinguishable from expert effort.

Domain experts also do another level of verification (hopefully) which is to check if the generated content computes correctly as a result - based on their mental model of their domain.

I would predict that LLMs are deadly in the hands of people who can't gauge the output; they will end up driving themselves off a cliff. Experts, meanwhile, will be able to use them effectively on tasks where verifying the output takes comparatively less effort than producing it.

gh0stcat

3 months ago

You've perfectly captured my experience as well. I typically only trust LLMs and have good experiences with them when I have enough domain expertise to be at least 95% confident the output is correct (specific to my domain of work, I don't always need "perfect"). I can also use them as a first pass to get an idea of where to begin research; after that, I lose confidence that the more detailed and advanced content they give me is accurate. There is a gray area, though, where a domain expert might have a false sense of confidence and over time experience "skill drift": losing expertise because they are only ever verifying a lossy compression of information, rather than re-setting their context with real-world information. I am mostly concerned with that last bit.

ilrwbwrkhv

3 months ago

Yup, a succinct summary of the current state. This holds across domains, from research to software engineering.

user

3 months ago

[deleted]

saulpw

3 months ago

I tried it recently. I asked for videochat services like the one I use (WB) with 2 specific features that the most commonly used services don't have. It asked some clarifying questions and seemed to understand the mission, then went off for 10 minutes after which it returned 5 results in a table.

The first result was WB, which I gave to it as the first example and am already using. Results 2 and 3 were the mainstream services which it helpfully marked in the table as not having the features I need. Result 4 looked promising but was discontinued 3 years ago. Result 5 was an actual option which I'm trying out (but may not work for other reasons).

So, 1/5 usable results. That was mildly helpful I guess, but it appeared a lot more helpful on the surface than it was. And I don't seem to have the ability to say "nice try but dig deeper".

Gracana

3 months ago

You can tell it to try again. It took me a couple rounds with the tool before I noticed that your conversation after the initial research isn't limited to just chatting: if you select the "deep research" button on your message, it will run the search process in its response.

simonw

3 months ago

That sounds like a Deep Research query; was that with OpenAI or Gemini?

saulpw

3 months ago

This was OpenAI.

jeffbee

3 months ago

The Deep Research stuff is crazy good. It solves the issue that I can often no longer find articles that I know are out there. Example: yesterday I was holding forth on the socials about how 25 years ago my local government did such and such thing to screw up an apartment development at the site of an old movie theater, but I couldn't think of the names of any of the principals. After Googling for a bit I used a Deep Research bot to chase it down for me, and while it was doing that I made a sandwich. When I came back it had compiled a bunch of contemporaneous news articles from really obscure bloggers, plus allusions to public records it couldn't access but was confident existed, that I later found using the URLs and suggested search texts.

user

3 months ago

[deleted]

btbuildem

3 months ago

It's a relevant question about the economic model of the web. On one hand, the replacement of search with an LLM-based approach threatens the existing, advertising-based model. On the other hand, the advertising model has produced so much harm: literally irreparable damage to attention spans, outrage-driven "engagement", and the general enshittification of the internet, to mention just a few. I find it a bit hard to imagine that whatever succeeds it will be worse for us collectively.

My question is: how do you reproduce this level of functionality locally, in a "home lab" type setting? I fully expect the various AI companies to follow the exact same business model as any other VC-funded tech outfit: free service (you're the product) -> paid service (you're still the product) -> paid service with advertising baked in (now you're unabashedly the product).

I fear that with LLM-based offerings, the advertising will be increasingly inseparable, and eventually undetectable, from the actual useful information we seek. I'd like to get a "clean" capsule of the world's compendium of knowledge with this amazing ability to self-reason, before it's truly corrupted.

fzzzy

3 months ago

You need a copy of R1 and enough RAM to run it, plus a web searching tool or a RAG database with your personal data store.
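
For example, a minimal local sketch with a distilled R1 served by Ollama and its Python client (the model tag is an assumption; pull whatever quantization fits your RAM):

    # Minimal local sketch: distilled R1 via Ollama's Python client.
    # "deepseek-r1:14b" is an assumed tag; substitute what your hardware can run.
    import ollama

    response = ollama.chat(
        model="deepseek-r1:14b",
        messages=[{"role": "user", "content": "Summarize my notes on X."}],
    )
    print(response["message"]["content"])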

btbuildem

3 months ago

R1 would be the reasoning model: the initial part of the output is the "train of thought" revealed before the "final answer" is provided. I was able to deploy a heavily quantized version locally and run it with RAG (Open WebUI in this instance), with web search enabled, sure, but it's still a far cry from an actual "research" model that knows when and how to seek extra data / information.

Tycho

3 months ago

I tried o3 for a couple of things.

First one: geolocating a photo I saw in a museum. It didn't find a definitive answer, but it sure turned up a lot of fascinating info in its research.

Second one: I asked it to suggest a new line of enquiry in the Madeleine McCann missing person case. It made the interesting suggestion that the 30 minute phone call the suspect made on the evening of the disappearance, from a place near the location of the abduction, was actually a sort of "lookout call" to an accomplice nearby.

Quite impressed. This is a great investigative tool.

xp84

3 months ago

From article:

> “Google is still showing slop for Encanto 2!” (Link is provided)

I believe quite strongly that Google is making a serious misstep in this area, the “supposed answer text pinned at the top above the actual search results.”

For years they showed something in this area that was directly quoted from what I assume was a shortlist of non-BS sites, so users were conditioned for years that if they just wanted a simple answer, like when a certain movie came out or whether a certain show had been canceled, they may as well trust it.

Now it seems they have given that real estate over to a far less reliable feature, which simply feeds any old garbage it finds anywhere into a credulous LLM and takes whatever pops out. 90% of the people I witness using Google today simply read that text and never click any results.

As a result, Google is now pretty much always even less accurate at the job of answering questions than if you posed that same question to ChatGPT, because GPT seems to be drawing from its overall weights which tend toward basic reality, whereas Google’s “Answer” seems to be summarizing a random 1-5 articles from the Spam Web, with zero discrimination between fact, satire, fiction, and propaganda. How can they keep doing this and not expect it to go badly?

ljsprague

3 months ago

I have stopped using Google when I have a random fact I need answered. Faster to ask ChatGPT. I trust it enough now.

softwaredoug

3 months ago

I wonder when Google search will let me "chat" with the search results. I often want to ask the AI Overview follow up questions.

I secondarily wonder how an LLM solves the trust problem in web search, which was traditionally solved (and is now gamed) through PageRank. ChatGPT doesn't seem to be as easily fooled by spam as direct search is.

How much of this is Bing (or whatever the underlying search engine is) getting better, vs. how much is LLMs getting better at knowing what a good result for a query is?

Or perhaps it has to do with the richer questions that get asked to chat vs search?

dingnuts

3 months ago

>I wonder when Google search will let me "chat" with the search results

Kagi has this already, it's great. Choose a result, click the three-dot menu, choose "Ask questions about this page." I love to do this with hosted man pages to discover ways to combine the available flags (and to discover what is there)

I find most code LLMs write to be subpar but Kagi can definitely write a better ffmpeg line than I can when I use this approach

vunderba

3 months ago

> I wonder when Google search will let me "chat" with the search results.

You don't hear a lot of buzz around them, but that's kind of what Perplexity lets you do. (Possibly Phind too, but it's been a while since I used them.)

KTibow

3 months ago

When AI Overview was called Search Generative Experience, you could do that. You can do that again now if you have access to AI Mode.

csallen

3 months ago

It's actually quite doable to build your own deep research agent. You just need a single prompt, a solid code loop to run it agentically, and some tools for it to call. I've been building a domain-specific deep research agent over the past few days for internal use, and I'm pretty impressed with how much better it is than any of the official deep search agents for my use case.
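
The shape of the loop is roughly this (a toy sketch, not my actual implementation; call_llm and web_search are stubs standing in for whatever model API and tools you use):

    # Toy deep-research loop: one prompt, an agentic loop, a search tool.
    SYSTEM_PROMPT = "Plan searches, read results, answer when confident."

    def web_search(query: str) -> str:  # stub tool
        return f"(search results for {query!r})"

    def call_llm(prompt: str, notes: list[str]) -> dict:  # stub model call
        # A real version sends prompt + notes to a model and parses a JSON
        # action like {"kind": "search", "query": "..."}.
        return {"kind": "answer", "text": "(final report)"}

    def deep_research(question: str, max_steps: int = 20) -> str:
        notes: list[str] = []
        for _ in range(max_steps):
            action = call_llm(f"{SYSTEM_PROMPT}\nQ: {question}", notes)
            if action["kind"] == "search":
                notes.append(web_search(action["query"]))
            elif action["kind"] == "answer":
                return action["text"]
        return "(ran out of steps)"

    print(deep_research("How many unique NFL players appeared in 2024?"))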

mz00

3 months ago

Same here. I first built the agentic workflow in Python and later Next.js. It uses dozens of LLM APIs, works well, and I'm also impressed with the results.

63

3 months ago

One downside I found is that the LLM cannot change its initial prompt until it's done thinking. I used deep research to compare counseling centers for me, but of course when it encountered some factor I hadn't thought of (e.g. the counselors here fit the criteria perfectly but none accept my insurance), it didn't know that it ought to skip that site entirely. Really this is a critique of the deep-research approach rather than search in general, but I imagine it can still play out on smaller scales. Often, searching for information is a dynamic process involving the discovery of unknown unknowns and adjusting based on that, but AI isn't great at abstract goals or at stopping to ask clarifying questions before resuming. Ultimately, the report I got wasn't useless, but it mostly just regurgitated the top 3 Google results. I got much better recommendations by reaching out to a friend who works in the field.

blackhaz

3 months ago

This is surprising. o3 produces an incredible amount of hallucinations for me, and there are lots of Reddit threads about it. I've had to roll back to another model because it just swamps everything in made-up facts. But sometimes it is frighteningly smart. Reading its output sometimes feels like I'm missing IQ points.

baq

3 months ago

> I can feel my usage of Google search taking a nosedive already.

Conveniently, Gemini is the best frontier model for everything else; they're very interested, and well positioned (if not best positioned), to also be the best at deep research. Let's check back in 3-6 months.

jillesvangurp

3 months ago

Google has two advantages:

1) Their AI models aren't half bad. Gemini 2.5 seems to be doing quite well relative to some competitors.

2) They know how to scale this stuff. They have their own hardware, lots of data, etc.

Scaling is of course the hard part. Doing things at Google scale means doing it well while still making a profit. Most AI companies are just converting VC cash into GPUs and energy. VC subsidized AI is nice at a small scale but cripplingly expensive at a larger scale. Google can't do this; they are too large for that. But they are vertically integrated, build their own data centers, with their own TPUs, etc. So, once this starts happening at their scale, they might just have an advantage.

A lot of what we are seeing is them learning to walk before they start running faster. Most of the world has no clue what Perplexity is, or any notion of the pros and cons of Claude 3.7 Sonnet vs. o4-mini-high. None of that stuff matters long term. What matters is who can do this stuff well enough for billions of people.

So, I wouldn't count them out. But none of this stuff guarantees success either, of course.

throwup238

3 months ago

IMO they’re already the best. Not only is the rate limit much higher (20/day instead of OpenAI’s 10/month) but Gemini is capable of looking at far more sources, on the order of 10x.

I just had a research report last night that looked at 400 sources when I asked it to help identify a first edition Origin of Species (it did a great job too, correctly explaining how to identify a true first edition from chimeral ones).

sublimefire

3 months ago

I prefer tools like GPT Researcher, where you are in control of sources and search engines. Sometimes you just need to use arXiv; sometimes you mix research with the docs you have; sometimes you want to use different models. I believe the future is in choosing what you need for the specific task at that moment, e.g. 3D model generation mixed with something else, and this all requires some sort of new "OS"-level application to run from.

Individual model vendors cannot build such a product, as they are biased towards their own models; they would not allow you to choose models from competitors.

energy123

3 months ago

> The user-facing Google Gemini app can search too, but it doesn’t show me what it’s searching for.

Gemini 2.5 Pro is also capable of searching as part of its chain of thought. It needs light prodding to show URLs, but it'll do so and is good at it.

Unrelated point, but I'm going to keep saying this anywhere Google engineers may be reading: the main problem with Gemini is their horrendous web app, riddled with 5 annoying bugs that I identified as a casual user after a week. I assume it's in such a bad state because they don't actually use the app, they use the API. But come on. You solved the hard problem of making the world's best overall model and are squandering it on the world's worst user interface.

loufe

3 months ago

There must be some form of memory leak in AI Studio, as I'll have to close and open a new tab after about 2 hours as it slowly grinds my slower computers to a halt. Its inability to create a markdown file without escaping the markdown itself (including code snippets) is definitely the first thing I'd suggest they fix.

It's a great tool, but sometimes frustrating.

mehulashah

3 months ago

I find that people often conflate search with analytics when discussing Deep Research. Deep Research is iterated search and tool use, and, no doubt, it's remarkably good. Deep Analytics is like Deep Research in that it uses generative AI models to generate a plan, but LLM operations and structured operations (tool use) are interleaved in database-style query pipelines. This allows for the more precise counting and exhaustive-search type use cases.
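
A toy illustration of the interleaving (llm_extract is a stub standing in for a per-row model call inside an otherwise ordinary query pipeline):

    # Toy "Deep Analytics" pipeline: exact structured ops interleaved with an
    # LLM operator. llm_extract is a stub for a real per-row model call.
    import pandas as pd

    def llm_extract(text: str) -> str:
        return "positive"  # a real version would classify via an LLM

    docs = pd.DataFrame({
        "source": ["10-K", "blog", "10-K"],
        "text": ["...", "...", "..."],
    })

    result = (
        docs[docs["source"] == "10-K"]                       # exact filter
        .assign(label=lambda d: d["text"].map(llm_extract))  # LLM map step
        .groupby("label").size()                             # precise counting
    )
    print(result)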

jonas_b

3 months ago

A common Google searching thing I encounter is something like this:

I need to get from A to B via C via public transport in a big metropolis.

Now C could be one of say 5 different locations of a bank branch, electronics retailer, blood test lab or whatever, so there's multiple ways of going about this.

I would like a chatbot solution that compares all the different options and lays them out ranked by time from A to B. Is this doable today?

pyfon

3 months ago

I am on holiday now and want something similar. Get me from A to B, but with a memorable heuristic that I can use if I leave at any time. E.g. "if you catch a 134 or 175 bus to Kings station, then get the metro 3 stops to Central station." Even better if you add some landmarks.

This may exclude some clever routes that shave off 3 minutes if you do the correct parkour... but it means I can now put my phone down and enjoy the journey without tracking it like a hawk.

swyx

3 months ago

> Deep Research, from three different vendors

don't forget xAI Grok!

M4v3R

3 months ago

Which, at least in my experience, is surprisingly good while being much faster than the others.

fudged71

3 months ago

you.com is surprisingly good for this as well (I like the corporate report PDF export)

ilrwbwrkhv

3 months ago

Horrible compared to SOTA. I only find it mentioned by random AI influencers who are a waste of air and who live on Twitter.

gitroom

3 months ago

I feel like half the time these AI tools are either way too smart or just eating glue, tbh. Do you think people will ever actually trust AI for deep answers, or are we all just using it to pass time at work?

soulofmischief

3 months ago

Do you understand how search works? After finding something, you still have to verify it.

in_ab

3 months ago

Claude doesn't seem to have a built-in search tool, but I tried this with an MCP server that searches Google, and it gives similar results.

Alifatisk

3 months ago

Has anyone tried ithy.com yet? If you have a prompt that you know most LLMs fail at, I'd love to know how Ithy responds!

gcanyon

3 months ago

The concern over "LLMs vs. the Web" is giving me serious "The Web vs. Brick and Mortar" vibes. That's not to say that it won't be the predicted cataclysm, just that it might not be. Time will tell, because This Is Happening, People, but if it does turn out to be a serious problem, I think we'll find some way to adapt. We're unlikely to accept a lesser result.

Havoc

3 months ago

Are any of the Deep Research tools pure API cost, or are they all monthly subs?

simonw

3 months ago

I think the Gemini one may still be available for free.

BambooBandit

3 months ago

I feel like the bigger problem isn't whether these deep research products work, but rather the raw material, so to speak, that they're working with.

For example, a lot of the "sources" cited in Google's AI Overview (notably not a deep research product) are not official, just sites that probably rank high in SEO. I want the original source, or a reliable source, not joeswebsite dot com (no offense to this website if it indeed exists).

simonw

3 months ago

Yes, Google's AI overviews are terrible. They're an example of how not to build this.

That's what makes o3/o4-mini driven search notable to me: those models appear to have much better taste in which searches to run and which sources to consider.

qwertox

3 months ago

I feel like the benefit which AI gives us programmers is limited. They can be extremely advanced, accelerative and helpful assistants, but we're limited to just that: architecting and developing software.

Biologists, mathematicians, physicists, philosophers and the like seem to have an open-ended benefit from the research which AI is now starting to enable. I kind of envy them.

Unless one moves into AI research?

bluefirebrand

3 months ago

I don't think AI is trustworthy or accurate enough to be valuable for anyone trying to do real science

That doesn't mean they won't try though. I think the replication crisis has illustrated how many researchers actually care about correctness versus just publishing papers

parodysbird

3 months ago

Biologists, mathematicians, physicists, and philosophers are already the experts who produce the text in their domain that the LLMs might have been trained on...

twic

3 months ago

Until AI can work a micropipette, it's going to be of fairly marginal use to biologists.

oulipo

3 months ago

The main "real-world" use cases for AI use for now have been:

- shooting buildings in Gaza https://apnews.com/article/israel-palestinians-ai-weapons-43...

- compiling a list of information on Government workers in US https://www.msn.com/en-us/news/politics/elon-musk-s-doge-usi...

- creating a few lousy music videos

I'd argue we'd be better off SLOWING DOWN with that shit

sandspar

3 months ago

You seem ideology motivated instead of truth motivated which makes you untrustworthy.

esafak

3 months ago

Programming is not real world?