I don't understand what they want to do, but I hope it won't be stupid, useless AI chat that sends all user data to OpenAI without disclosing it to the user. Nobody needs to talk to a computer program, and "AI chats" are the worst usage of otherwise important technology.
A good use of AI would be, for example:
- high-quality translation of a foreign text when you point a camera at it. Could be useful for travelers
- recognizing and reading aloud the text from camera image for people who have bad eyesight
- recognizing speech for people who have hearing loss
- image search, for example determining types of plants and insects and dog breeds
- checking grammar and style in text input boxes
At least with their previous features it's been possible to set the address to any custom service you may have running, remote or local. For example, against all my expectations, I've actually found the highlight -> explain tool useful when sent to my local vLLM instance with a template I like.
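For anyone wondering what "sent to my local vLLM instance with a template" can look like in practice, here's a minimal sketch against vLLM's OpenAI-compatible chat endpoint. The URL, model name, and prompt template are my own assumptions for illustration, not Firefox's actual settings:

```python
import json
import urllib.request

# Assumed default address of a locally running vLLM server
# (vLLM serves an OpenAI-compatible API at /v1 by default).
VLLM_URL = "http://localhost:8000/v1/chat/completions"


def build_explain_request(selection: str, model: str = "my-local-model") -> dict:
    """Build an OpenAI-style chat payload from highlighted text.

    The template is a hypothetical example of the kind of prompt
    the highlight -> explain tool might send.
    """
    template = (
        "Explain the following text concisely. If it contains an acronym, "
        "expand it with the meaning that fits this context:\n\n{text}"
    )
    return {
        "model": model,
        "messages": [{"role": "user", "content": template.format(text=selection)}],
        "max_tokens": 256,
    }


def explain(selection: str) -> str:
    """Send the highlighted text to the local vLLM server and return its reply."""
    req = urllib.request.Request(
        VLLM_URL,
        data=json.dumps(build_explain_request(selection)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```

The point is just that the "AI feature" is an ordinary HTTP request, so any endpoint speaking the OpenAI API shape (vLLM, llama.cpp's server, etc.) can stand in for a hosted provider.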
Why not google/ddg/bing etc. them? That's on the context menu too, but LLMs seem uniquely suited to some problems, like acronyms that are shared across many fields with different meanings. Highlighting a sentence turns up the right acronym very fast, where search engines (which is what I used previously) would take several attempts.
LLMs can make things up though.
Of course. It's generally useful for things you can verify, which for acronyms is easy as long as it gives you the words.
Neither this nor the blog post explains what this feature is actually supposed to do?
So, basically:
Firefox users: "Stop cramming AI into my browser!"
Mozilla: "How 'bout I do, anyway?"
From the blog page at https://blog.mozilla.org/en/firefox/ai-window/
it looks like it's an explicit option to open a window with or without AI features, so you get a choice to enable them if you want.
Blog post: https://blog.mozilla.org/en/firefox/ai-window/