How does this work? I thought it was probably powered by embeddings and maybe some more traditional search code, but I checked out the linked github repo and I didn't see any model/inference code. The public code is a wrapper that communicates with your commercial API?
Some searches work like magic and others seem to veer off target a lot. For example, "sculpture" and "watercolor" worked just about how I'd expect. "Lamb" showed lambs and sheep. But "otter" showed a random selection of animals.
It is powered by Mixedbread Search, which is powered by our model Omni. Omni is multimodal (text, video, audio, images) and multi-vector, which helps us capture more information.
The search is in beta and we are improving the model. Thank you for reporting the queries that are not working well.
Edit: Re the otter, I just checked and I did not find otters in the dataset. We should not return any results when the model is not sure, to reduce confusion.
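For anyone curious what "multi-vector" typically means in practice: instead of one embedding per item, each item keeps many token/patch vectors, and relevance is scored by late interaction (ColBERT-style MaxSim). A minimal sketch with random stand-in vectors; this illustrates the general technique, not necessarily how Omni actually scores results:

```python
import numpy as np

# ColBERT-style "late interaction" scoring over multi-vector embeddings.
# Illustrative sketch only: the vectors are random stand-ins.

def maxsim_score(query_vecs: np.ndarray, doc_vecs: np.ndarray) -> float:
    """Sum, over query tokens, of the best-matching document token.

    Both inputs are (num_tokens, dim) with L2-normalized rows, so the
    matrix product gives cosine similarities for every token pair.
    """
    sims = query_vecs @ doc_vecs.T        # (n_query, n_doc) cosine sims
    return float(sims.max(axis=1).sum())  # best doc token per query token

def normalize(m: np.ndarray) -> np.ndarray:
    return m / np.linalg.norm(m, axis=1, keepdims=True)

rng = np.random.default_rng(0)
query = normalize(rng.normal(size=(4, 8)))   # 4 query-token vectors, dim 8
doc_a = normalize(rng.normal(size=(10, 8)))  # unrelated document
doc_b = normalize(np.vstack([query, rng.normal(size=(6, 8))]))  # contains the query tokens

score_match = maxsim_score(query, doc_b)  # one perfect match per query token -> 4.0
score_other = maxsim_score(query, doc_a)
```

Because every query token gets to pick its own best match in the document, this kind of scoring preserves fine-grained detail that a single pooled vector would average away.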
There's at least a little bit of otter in the data. The one relevant result I saw was "Plate 40: Two Otters and a Beaver" by Joris Hoefnagel.
I also expected semantic search to return similar results for "fireworks" and "pyrotechnics," since the latter is a less common synonym for the former. But I got many results for fireworks and just one result for pyrotechnics.
This is still impressive. My impulse is to poke at it with harder cases to try to reason about how it could be implemented. Thanks for your Show HN and for replying to me!
If you find more such cases please feel free to send them over to aamir at domain name of the Show HN. I would love to see those cases and see how we can improve on them. Thank you so much for the feedback.
neither "blue pictures" nor "multiples" worked well.
thank you for reporting these. we will improve on them for the next iteration.
I'll pile on since these are useful. Searching for "fingers and holes" did find me some nice hand drawings, but the real gold at the national gallery to me is the Bruce Nauman. The nga.gov search knew what I wanted.
I built a toy version of something like this a couple-ish years ago for a hackathon. I wrote up a blog of how I did it back then for anyone interested: https://www.patrickogilvie.com/engineering/Image_Search_Engi...
Would be interesting to know how relevant that approach is now.
This is neat; I'm not sure how to report queries that are working poorly, as you mentioned. But when I search "Waltz" I am presented with kitchen utensils and only one piece of dancing folks. Presumably this is due to the artist's name being 'Walton'.
We will add a feedback form tomorrow morning. For now please feel free to write to aamir at domain name of the page. thank you so much! this helps us a lot.
Tried "Images of german shepherds" and not one showed up on the page of 16 results.
In case anyone wants to do this themselves, check out the pipeline here: https://github.com/isc-nmitchko/iris-document-search
ColNomic and NVIDIA models are great for embedding images, and MUVERA can transform those multi-vector outputs into single 1-D vectors.
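For context on that last step: MUVERA's idea is to collapse a *set* of token vectors into one fixed-dimensional vector whose inner product approximates the multi-vector score, so ordinary single-vector ANN indexes can be used. A heavily simplified sketch (real MUVERA uses multiple random partitionings plus a final projection; all sizes here are arbitrary):

```python
import numpy as np

# Toy MUVERA-style fixed-dimensional encoding (FDE): flatten a set of
# token vectors into one vector via a single SimHash partition.

def simhash_buckets(vecs: np.ndarray, hyperplanes: np.ndarray) -> np.ndarray:
    # Bucket id = bit pattern of which side of each random hyperplane
    # the vector falls on.
    bits = (vecs @ hyperplanes.T) > 0  # (n_tokens, n_planes)
    return bits.astype(int) @ (1 << np.arange(hyperplanes.shape[0]))

def fde(vecs: np.ndarray, hyperplanes: np.ndarray, reduce) -> np.ndarray:
    n_buckets = 1 << hyperplanes.shape[0]
    out = np.zeros((n_buckets, vecs.shape[1]))
    buckets = simhash_buckets(vecs, hyperplanes)
    for b in range(n_buckets):
        members = vecs[buckets == b]
        if len(members):
            out[b] = reduce(members)  # one slot per bucket
    return out.ravel()                # flat 1-D vector

rng = np.random.default_rng(1)
planes = rng.normal(size=(3, 8))       # 3 hyperplanes -> 2**3 = 8 buckets
doc_tokens = rng.normal(size=(12, 8))  # a "document" of 12 token vectors

# Asymmetric encoding as in MUVERA: queries sum per bucket, docs average.
q = fde(doc_tokens[:4], planes, reduce=lambda m: m.sum(axis=0))
d = fde(doc_tokens, planes, reduce=lambda m: m.mean(axis=0))
score = float(q @ d)  # one dot product stands in for the multi-vector score
```

The payoff is that `q` and `d` are plain fixed-length vectors, so they drop straight into FAISS, HNSW, or any other standard single-vector index.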
> check out the pipeline here
“the pipeline” - seems like this is just a personal hackathon project?
Why these models vs other multimodals? Which “nvidia models”?
Yale has an amazing one, worth looking at: https://lux.collections.yale.edu/
Is that a multi-modal search? Or just textual matching?
I couldn't find any examples that couldn't be explained by simple text matches.
I love old stereograms, and was happy to find a couple using this tool!
Works really well for some artist names (rembrandt, whistler) and exceedingly poorly for others (john singer sargent).
It would be nice if it took you to the NGA page about the item. I can't even copy the text easily to search for it myself.
"Images of german shepherds" never fails to provide some humor.
Thank you for pointing this out. We will add this tomorrow morning.
The results for "Mark Rothko", "Paintings by Mark Rothko", "Paintings similar to mark rothko", etc. do not bring up anything I was expecting. The NGA has a large collection of Rothko paintings, but none of them come up.
This NGA link returns over a thousand pieces by Rothko: https://www.nga.gov/artists/1839-mark-rothko/artworks
Right now we are not including the artist name; that will be done in the next iteration of the model (next week). Currently the search is based only on what the model can "see", and it seems the model does not understand the art of Mark Rothko.
The next version can see the image and read the metadata.
A bit more context: we include everything in the latent space (embeddings) without trying to maintain multiple indexes and hack around things. There is still a huge mountain to climb, but this approach seems really promising.
And this seems like a hard limitation of this approach: art (vs. craft) is concerned with interpretation and reception, whereas this is more like Unsplash-for-galleries in that the searches have to be very literal, I guess? (E.g. searching for something abstract like 'dreams', something you will find depicted in the collection, produces quite the mixed bag of results.)
A search for "character studies of old farmers" yielded good results. The results are drawings/engravings, which may reflect the balance of the collection; perhaps this subject is used more in practice studies than in marketable oil paintings.
Since this is a semantic search using vector embeddings, it will handle meanings better than a text search, which would handle names better.
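A toy illustration of that trade-off, with handcrafted 3-d vectors standing in for real model embeddings (none of this is the site's actual code):

```python
import numpy as np

# Toy contrast between embedding search and keyword search. The 3-d
# "embeddings" are handcrafted stand-ins; the dimensions loosely mean
# [explosive display, color-field painting, dancing].
EMB = {
    "fireworks":    np.array([1.0, 0.1, 0.0]),
    "pyrotechnics": np.array([0.9, 0.1, 0.1]),  # near-synonym of "fireworks"
    "rothko":       np.array([0.0, 1.0, 0.0]),
    "waltz":        np.array([0.0, 0.0, 1.0]),
}

def cos(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def keyword_match(query: str, title: str) -> bool:
    return query.lower() in title.lower()

title = "Fireworks over the harbor"

# Embedding search ranks the synonym query close to the document...
semantic_hit = cos(EMB["pyrotechnics"], EMB["fireworks"]) > cos(EMB["pyrotechnics"], EMB["waltz"])
# ...but a plain keyword match on the title misses it entirely.
keyword_hit = keyword_match("pyrotechnics", title)
```

This is why hybrid setups often keep a lexical index alongside the vectors: synonyms like "pyrotechnics" need the embedding side, while exact names like "Rothko" or "Walton" are easiest to nail with a literal match.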
Congrats on the launch, guys. I remember meeting y'all in SF. What happened to your HF model/project?
there is a lot coming
love that a search for 'chill vibes sculpture' returned a very chill set of results. nice step change in art search capabilities
Is it possible to add other data sources?
Yes, which one would you be interested in?
hey, your service is back up again!!! Mixedbread was my favorite tool for so long since your pivot, and I'm so glad y'all are back
We have a lot more coming soon. It just took us some time to build Mixedbread Search.
When code and canvas meet, a search becomes not just words but feeling. Among paintings and bars of pixels, the machine tries to grasp answers left unspoken.