
Our June 2025 release of the Valyu DeepSearch API brings three highly requested improvements designed to remove friction from your search and retrieval workflows. You’ll now spend less time tweaking parameters and more time getting the exact context you need.
- Understands Your Intent. We've supercharged our Named Entity Recognition model so it automatically picks out author names, paper/article titles, conferences, and years. It is entirely intent-driven: no special flags, no extra parameters. DeepSearch "gets" what you mean.
- Search Inside Any Open-Access Article. Fetch, parse, and search within an article, all in one call. Just hand DeepSearch a URL to an open-access PDF and ask your question. It works seamlessly with arXiv, PubMed Central, and similar sources.
- Rich Images from Web Results. When your query returns web pages, DeepSearch now crawls and serves any embedded images alongside the text. Ideal for grabbing figures, diagrams, or photos without extra work.
Smarter Search That Gets Your Intent
DeepSearch’s improved NER engine tags entities such as author names, paper titles, conference names and dates. We feed those signals into our ranking algorithm so your results automatically align with your intent.
For example, if you ask for the Andrew Ng paper presented at ICML in 2012, DeepSearch spots "Andrew Ng," "2012," and "ICML" and elevates the exact paper you were looking for, with no extra parameters needed.
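To make the point concrete, here is a minimal sketch of what such a request looks like on the client side. The endpoint URL and payload shape are illustrative assumptions, not the documented API schema; the point is simply that the query is plain natural language, with no entity flags or filter parameters.

```python
import json

API_URL = "https://api.valyu.network/v1/deepsearch"  # assumed endpoint, for illustration

def build_query_payload(query: str) -> dict:
    """Build a DeepSearch request body. Note there are no author/year/venue
    parameters: the NER step infers them from the query text itself."""
    return {"query": query}

payload = build_query_payload("the Andrew Ng paper presented at ICML 2012")
body = json.dumps(payload)
# A client would POST `body` to API_URL with its API key, e.g.:
# requests.post(API_URL, data=body, headers={"Authorization": f"Bearer {KEY}"})
```

The entity extraction and re-ranking happen entirely server-side, which is why nothing in your request changes.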
Search Inside Any Open Article
Found an article and want your agent/app to dive into it? Just pass its URL to DeepSearch and ask your follow-up question.
It works with open-access sources like arXiv and PubMed Central. There’s no scraping or downloading involved. Just clean, agent-ready search.
For example, a single call can pull out the relevant section of "Scaling Laws for Neural Language Models" along with the scaling-law figures embedded in the paper.
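A sketch of what that call might look like: the `included_sources` field appears later in this post, but the endpoint path and the other field names here are illustrative assumptions rather than the documented schema.

```python
def build_in_article_payload(question: str, article_url: str) -> dict:
    """Scope a DeepSearch query to a single open-access article by passing
    its URL in `included_sources`. No scraping or downloading on your side;
    fetching and parsing the PDF happens server-side."""
    return {
        "query": question,
        "included_sources": [article_url],
    }

payload = build_in_article_payload(
    "How does loss scale with model size?",
    "https://arxiv.org/abs/2001.08361",  # Scaling Laws for Neural Language Models
)
```

The same one-call pattern works for any open-access URL, e.g. a PubMed Central article.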
Rich Images for Web Search Results
Now, when your search taps into web pages, DeepSearch will extract any embedded images and include their URLs in the result payload. Perfect for grabbing infographics, diagrams or photos.
You get both the text and any visuals in one go, so your agents can assemble richer, more context-aware responses.
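On the consuming side, an agent just needs to walk the payload and collect the image URLs next to each result's text. The response shape below (a `results` list whose items may carry an `image_urls` field) is an assumption for illustration, not the documented schema.

```python
def collect_image_urls(response: dict) -> list[str]:
    """Gather every embedded-image URL across all web results,
    skipping text-only results that carry no images."""
    urls: list[str] = []
    for result in response.get("results", []):
        urls.extend(result.get("image_urls", []))
    return urls

sample_response = {
    "results": [
        {"title": "Example page", "content": "...",
         "image_urls": ["https://example.com/figure1.png"]},
        {"title": "Text-only page", "content": "..."},
    ]
}
print(collect_image_urls(sample_response))  # ['https://example.com/figure1.png']
```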
How to use these features
Our design philosophy for the DeepSearch API has always been to make it agent-native. As such, we’ve made it super simple for agents in tool-calling workflows to interact with these features and get the context they need. For example, a research agent might:
- Make a high-level query such as “research on agentic search-enhanced large reasoning models”
- Using the response returned from the API, dive deeper into one of the returned papers by passing its arXiv URL in the `included_sources` field.
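The two-step workflow above can be sketched as follows. Apart from `included_sources`, which this post names, the payload field names are illustrative assumptions, and the follow-up question and placeholder URL are hypothetical.

```python
def broad_query(topic: str) -> dict:
    # Step 1: a high-level query across all indexed sources.
    return {"query": topic}

def follow_up(question: str, paper_url: str) -> dict:
    # Step 2: narrow the follow-up to one paper chosen
    # from the first response's results.
    return {"query": question, "included_sources": [paper_url]}

step1 = broad_query("research on agentic search-enhanced large reasoning models")
# ...the agent inspects step-1 results and picks one paper's arXiv URL...
step2 = follow_up(
    "What training data does this method rely on?",  # hypothetical follow-up
    "https://arxiv.org/abs/PAPER_ID",  # placeholder URL for illustration
)
```

Because both steps are plain tool calls with small JSON bodies, they slot directly into any function-calling agent loop.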
How to upgrade
There’s nothing you need to change in your code or settings. These enhancements live in our backend and apply automatically for every existing API key.
Why we built it
We know AI agents live and die by the quality of their retrieval. With smarter intent detection, in-paper search and built-in image support, your agents spend less time retrying queries and more time delivering precise, actionable insights.
Give it a try or explore our docs. We’re excited to see what you’ll build with the next generation of DeepSearch. If you want us to help you index a specific source of information, reach out at founders@valyu.network