Here's how I would market this:
Create "packages" of context for popular APIs & libraries, and make them freely available via public URL.
Keep the packages up to date. They'll be newer than the cutoff date for many models, and they'll be cleaner than the data a model slurps in via web search.
Voila, you're now the trusted repository of context for the developer community.
You'll get a lot of inbound traffic/leads, and ideas for tangential use cases with commercial potential.
I love this; a ready-to-use library of context data would be very useful, and at the same time a perfect way to bring in new users. Thanks so much.
Of all the suggestions here, this is the one.
Wouldn't that just be third-party llms.txt files?
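For reference, llms.txt is a proposed convention: a markdown file served at a site's root that curates links to LLM-friendly documentation. A minimal sketch (project name and URLs are placeholders, not from this thread):

```markdown
# ExampleLib

> A short one-line summary of what ExampleLib does.

## Docs

- [Quick start](https://example.com/docs/quickstart.md): install and first run
- [API reference](https://example.com/docs/api.md): full function listing

## Optional

- [Changelog](https://example.com/changelog.md)
```

A hosted "context package" would presumably look similar: curated, regularly refreshed markdown at a stable public URL.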
To keep it free and avoid forwarding credentials, I built an alternative: all-in-one chatting without auth, with access to a search API through the web: https://llmcouncil.github.io/llmcouncil/
It provides a simple interface for chatting with Gemini, Claude, Grok, OpenAI, and DeepSeek in parallel.
I thought about building something along these lines (not the same but vaguely similar).
Then Gemini AI Studio came along with a 1 million token window and allowed me to upload zip files of my entire code base and I lost interest in my own thing.
Yes, long context is complementary: in other chat services like Gemini you have to rewrite that base context every time for each fresh chat, and they lack specific data-import tools and project management.
It technically handles 1M tokens, but if you ask it questions it's obvious that it's too much to handle.
Just upload a novel and ask it questions; you'll see how it botches simple stuff.
Easiest way to prove it can't handle the full context in reality is to upload a one hour documentary movie with audio and ask it to write timestamps of chapters/critical moments. It can't handle this beyond 10 minutes even remotely reliably.
Good or bad, it's a zillion times better than Claude or ChatGPT, where you can't even upload a zipfile.
If you want this in numbers, check the NoLiMa benchmark.
An agentic flow can solve this within an existing UI/app; I already use such a workflow when I have to bring in project documentation. That will be your competition.
Since it's a commercial product and feedback can be useful: people would generally be hesitant to leave their existing apps if there's a workaround. There's a certain stickiness to them, even ChatGPT. Personally I use self-hosted LibreChat, and the history and additional features it provides are important to me.
I appreciate the feedback!
Yes, I will work on making context management more productive, with a ready-to-use service and the ability to switch from other services easily.
This is nice. I would love to be on a mailing list for updates.
Thank you. You can follow product updates on https://x.com/MatteoRicupero, where I post more frequently, or here https://contextch.at/mailing-list if you prefer email.
How are you handling privacy / security / confidentiality if I upload all this data? No way I could use this for work.
Yes, that's actually not a trivial topic.
What I can do is be very transparent about how data is managed.
The file contents are appended to the context builder; the context and messages are then processed through OpenRouter, a provider that offers APIs for all the major AI models, and the generated output (along with account data) is stored in a secured database on the MongoDB platform.
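A minimal sketch of what that pipeline might look like on the request side. The model slug, context strings, and file contents are placeholders; OpenRouter exposes an OpenAI-compatible chat-completions endpoint, so the payload shape below follows that convention:

```python
import json

OPENROUTER_URL = "https://openrouter.ai/api/v1/chat/completions"

def build_payload(model: str, base_context: str,
                  file_contents: list[str], user_message: str) -> dict:
    """Prepend the built project context (base context plus the appended
    file contents) as a system message, followed by the user's message."""
    context = "\n\n".join([base_context, *file_contents])
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": context},
            {"role": "user", "content": user_message},
        ],
    }

# The actual request would POST this JSON to OPENROUTER_URL with an
# "Authorization: Bearer <OPENROUTER_API_KEY>" header.
payload = build_payload(
    "anthropic/claude-sonnet-4",      # any OpenRouter model slug
    "Project notes go here.",
    ["Contents of an imported file."],
    "Summarize the project.",
)
body = json.dumps(payload)
```

One convenient property of this design: switching models is just a different slug in the same payload, which is what enables the multi-model, pay-per-use angle.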
It’s all defined in the privacy policy here: https://contextch.at/docs/privacy-policy/index.html
Why do you care about "work" data? They're selling your data[1][2]. You should sell theirs, it's only fair.
[1] https://techpolicy.sanford.duke.edu/blogroll/fortune-500-com...
[2] https://techpolicy.sanford.duke.edu/blogroll/examining-data-...
Isn't NotebookLM already exactly web and file context (a "ContextChat")?
Edit: I assume it is basically a similar product, but your differentiators are mainly the customer getting to choose their model, and you getting to write your own context-adding ergonomics (like adding links from a sitemap)?
Exactly: similar, plus tools to import and manage project context fast (like private GitHub repos and sitemap URLs), multiple AI models, and pay-per-use pricing like using the APIs directly.
Nice idea!
I think it would be better if it was just context and not connected to any model. Think of one place where you can hook in your drive folder, GitHub, etc. and have it produce the best context for the task you want to achieve. Then users can copy that to their model or workflow of choice
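The model-agnostic version of this idea can be sketched in a few lines: pull text from each connected source and concatenate it into one copy-pasteable block. The connectors here are stand-ins (plain strings where a Drive or GitHub fetch would go):

```python
def build_context(sources: dict[str, str]) -> str:
    """Concatenate labeled sources (e.g. a Drive doc, a GitHub file) into
    one copy-pasteable context block; headers keep documents distinguishable."""
    return "\n\n".join(
        f"=== {label} ===\n{text.strip()}" for label, text in sources.items()
    )

context = build_context({
    "README.md": "Demo project...",        # would come from a GitHub connector
    "spec.gdoc": "Requirements draft...",  # would come from a Drive connector
})
```

The output is plain text with no model attached, so users can paste it into whatever chat or workflow they already use.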
Thank you, this could be a cool feature to add! For example, the ability to click a link that redirects to another chat service with the project base context you built, and optionally all the messages sent up to that point.
Compare to Claude Projects?
https://www.anthropic.com/news/projects
From what I can see, it doesn't offer the flexibility of importing content from a detailed sitemap or private GitHub repositories in a fast way (and more tools are coming).
It also doesn't offer the possibility to switch between different AI models, and you have to pay a monthly subscription.
Cool! I was excited when I saw this and signed up.
One key thing I was hoping for was a consistent resync with source material, particularly Google Docs. Looks like I'll have to download and then re-upload to your app whenever they change.
Is that right? Auto syncing in the plan?
Auto syncing added to the plan!
Cool. One option is just to integrate with Make/n8n/Zapier so I could a) trigger on doc changes and then b) upload (and overwrite) the doc in your app.
Yes, this sounds very useful and productive: having project context updated based on external events. I will share it on socials when ready; thanks for the feedback!
Sure! Email your users too - since I'm one of them I'll get the email. :)
Will do, thanks again!
I built this to solve my own problem of copy-pasting the same starting context from chat to chat. Now I can generate the base context once and start new chats from there.
Yes def a needed thing for power users.
You and I are going to end up competing, because I'm evolving my original solution in this space: https://github.com/backnotprop/prompt-tower ... best of luck, great execution thus far.
Thank you, I'll take a look at your software. Competition is always good.
Its UX looks similar to You.com
edit: whoops... commented on the wrong tab... nevermind, but Godspeed.
I appreciate it, thanks and keep building
Love this. I will give it a try. Beautiful landing page as well.
Thanks so much, if you try it out feel free to leave feedback if you want!
So now our jobs are shifting from doing work, to telling the AI to do work, so now we need management tools to better manage how we are telling the AI to do work.
I must have taken a turn to the wrong timeline.
Yeah. That's how it works with employees, too.
New tools for a new kind of work!
You give instructions as someone who can do the job themselves.
That ability will decay, of course, and you will be managing with the best of them. Eh, I mean the worst :)