I wrote an intro to AT that should be broadly accessible and states the problems before the solutions. You might find it helpful: https://overreacted.io/open-social/
The author wanted to take down their account (to take a break) so this is actually working as designed. The takedown was issued from the author’s repository (which they control), and the downstream app server acknowledged the request.
I'm not sure I would necessarily draw that conclusion.
If the author intentionally deactivated their Bluesky account, does the fact that he can successfully do that on Bluesky lead to the conclusion that it's less resilient?
I think you've nailed a problem with all of these, they would make "deleting your stuff" HARDER. What's stopping the rogue node from saving all your stuff forever?
I think "trying to make a thing that can work through rogue or stupid nodes" is just prohibitively harder than "work on making nodes more reliable" (which I absolutely grant is extremely hard.)
The comment makes so little sense that it could only be intended as a dumb gotcha from someone who thinks they're fighting in some sort of culture war about the Twitter succession. Ignoring is better than encouraging.
that is the best and worst aspect of nostr. it is a very interesting, semi-chaotic box of new toys to play with. reminds me of the early web. (the second best/worst aspect of nostr is key rotation.)
from an average developer perspective, nostr is interesting because it's "just" a digitally signed json data structure sent over a websocket. reading the spec [1] for creating a simple nostr client (aka "nip-1"), my average developer brain thinks: i could do that.
i don't get that same feeling when reading atproto or activitypub docs. ultimately, there's a reason why all these protocols get complicated at scale, but in the simple case, nostr is very easy to make a client for and start playing with.
nostr feels like a good example case for gall's law: "a complex system that works is invariably found to have evolved from a simple system that worked."
[1]: https://github.com/nostr-protocol/nips/blob/master/01.md
What questions do you have about AT? I agree its docs are mostly “bad” and hard to understand. I find the actual tech approachable so happy to answer more concrete questions.
Tools like http://pdsls.dev in particular can be helpful to see how things fit together.
i think it really is as simple as boiling it down into a doc that looks like nip-1 and saying, "this is the absolute minimum amount you need to understand and implement to start sending messages on an AT-based network." -- not from a user perspective, but from an average developer perspective.
i know eventually i'd need to implement a ton more than the absolute bare minimum, but my gut-feeling "average developer brain" says nostr's absolute minimum feels smaller than AT's absolute minimum. i guess i'm looking for an AT doc for devs that shows the absolute minimum for creating a client that is equally approachable as NIP-01.
Thanks, that’s helpful. I’ll see if I can write something in that spirit later.
thanks. also, fwiw, i'm a very happy AT user (@hugs.bsky.social) besides being a happy nostr user.
i appreciate bsky's focus on user ux and community building and look forward to seeing more sharing of ideas between nostr and AT.
edit to add: the way to nerd-snipe my brain into wanting to make stuff with AT (or any future protocol) is to focus on a quick-start or tutorial showing the absolute minimal client to send one message.
once i can do that... i'm ready to learn all the rest of the vocabulary and server-side stuff, but not until i can send one simple message from a barely functional minimal client.
Would it be ok to use a library or is the requirement to keep it to raw primitives like curl?
i like seeing a bit of the raw, low-level protocol first. a few curl examples are perfect for understanding what’s really happening under the hood. once i get that, i'm happy to use a library to handle all the edge cases.
but starting with a library tutorial makes me wonder how many stacks of turtles are being hidden. if i can see the turtles upfront, i'll appreciate what the library does for me -- and i'll have a better sense of how to debug when things break.
Absolutely. I think it’s a great constraint actually. I have a few other pieces in the backlog but I’ll keep this one in mind.
This isn’t quite what you want but should illuminate at least the “fetch on demand” part in detail: https://overreacted.io/where-its-at/
yeah, that looks like a good base for a simplified remix. thanks!
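for anyone following along, here's roughly what that "fetch on demand" step looks like at the HTTP level. this is just a sketch: it assumes the public Bluesky AppView host, and the DID and record key below are placeholders, not real identifiers.

```python
from urllib.parse import urlencode

# sketch: constructing a com.atproto.repo.getRecord call against the
# public Bluesky AppView. the DID and rkey below are placeholders.
def xrpc_get_record_url(repo: str, collection: str, rkey: str,
                        host: str = "https://public.api.bsky.app") -> str:
    query = urlencode({"repo": repo, "collection": collection, "rkey": rkey})
    return f"{host}/xrpc/com.atproto.repo.getRecord?{query}"

url = xrpc_get_record_url(
    "did:plc:example123",   # placeholder DID
    "app.bsky.feed.post",   # record type (lexicon NSID)
    "3kabc123",             # placeholder record key
)
print(url)
# a plain GET to this URL (curl works fine too) returns the record as JSON
```

the point being: the read path is an ordinary HTTP GET with three query parameters, which is exactly the kind of "turtle" that's nice to see before reaching for a library.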
The docs are bad, sadly that’s true.
I kind of feel like you’re taking one of the specs from nostr - the first one written - and calling that the whole protocol. Then you’re comparing all of the atproto specs to that one spec.
The substantive difference is that we didn’t do a mix & match spec process because we knew the ambiguity of spec support causes problems, just as it did with XMPP. Protocol specs only get implemented a few times. The meaningful developer choices are in schemas and business logic.
But that's essentially the whole protocol. You can implement a client or a server reading only NIP-01 and it will be able to interoperate with the rest of Nostr.
Reading and implementing NIP-01 can be done in an afternoon (or a weekend if you're taking your time), and it gets you relays that can accommodate multiple clients and applications. From the client perspective, only implementing NIP-01 gets you a simple Twitter clone with an identity that belongs to you.
the spirit of my comment was more psychological than technical. nip-1 successfully nerd-sniped my brain into thinking it was easy to get started with a simple, barely functional client. (even though, you're right, at scale, everything gets complicated and is not easy.)
perhaps this is a roundabout way of hoping there is already a developer-focused quick start or tutorial for making a barely functional AT client. either it already exists and i didn't look hard enough for it, or it might only be one chatgpt or claude prompt away.
Yeah that’s fair
This so much. ATProto just seems so complicated in comparison.
Both of these systems are rebellions against the structure of secure-scuttlebot, but took different paths as they rebelled.
Beyond using different cryptography, the biggest difference between the "ATProto System" and the "Nostr System" is that Jay Graber wanted to account for deletes and the re-organization of the message structure of an entire feed.
In early ATProto, aka smor-serve (https://github.com/arcalinea/smor-serve), Jay didn't like that we couldn't delete a message on SSB, so she centralized all of the operations into a "repo" where we can modify our social feed as much as we want, including even backdating posts. We can see how that evolved into how ATProto works today by browsing a repo with pdsls.
For Nostr NIP-01 to work, we generate a keypair, sign a message and blast it at a relay. There's no structure at all to how the messages are organized. Messages are out there once they are sent to the relays. This lack of structure leads to all kinds of issues about how one develops a strategy for syncing an entire history of a feed.
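That "sign and blast" step really is small. Here's a sketch of the NIP-01 event id computation in Python — the id is the sha256 of a canonical JSON serialization. The pubkey is a placeholder, and the schnorr signature over secp256k1 is omitted since it needs a library outside the standard library.

```python
import hashlib
import json
import time

# sketch of NIP-01 event id computation: the id is the sha256 of the
# canonical serialization [0, pubkey, created_at, kind, tags, content].
# a real event also needs a schnorr signature over secp256k1, which
# requires a third-party library and is omitted here.
def event_id(pubkey: str, created_at: int, kind: int,
             tags: list, content: str) -> str:
    payload = [0, pubkey, created_at, kind, tags, content]
    serialized = json.dumps(payload, separators=(",", ":"), ensure_ascii=False)
    return hashlib.sha256(serialized.encode("utf-8")).hexdigest()

eid = event_id("a" * 64, int(time.time()), 1, [], "hello nostr")
print(len(eid))  # 64 hex characters
```

Once signed, the event is wrapped in `["EVENT", event]` and sent to a relay over a websocket — that's the whole publish path.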
Both of these systems have developed into far larger complex systems that are impossible to hold in anyone's mind at this point, but the key difference is being able to delete a message. Most of the complexity in the "ATProto System" results from accounting for a message that one sends and then wants to unsend later. This is why everyone complains that Bluesky is centralized on the AppView/Index layer. But it's also centralized at the PDS layer.
I have theories about how to solve these problems and I'm happy to discuss endlessly at your next dinner party.
nostr can get plenty complicated, too, but nostr successfully tricked me into thinking it was simple enough to get started.
The more nerds that get sniped by a simple-seeming protocol, the more likely it is to catch on. Hitting a 100 page spec doc full of XML and links to other specs is a big de-motivator to start hacking on the protocol.
Yeah, although I would argue that there are far fewer moving pieces in Nostr than there are in ATProto and that's part of why it's so simple - it's just clients and relays. That's it!
Edit: another thing I thought about just now is that you don't really have to worry about implementing most NIPs - many are not relevant if you're just building an application. All the Bitcoin Lightning Network stuff, for example, or private messaging, Blossom, etc.
yes, "there are too many NIPs!" feels like a red herring. at the moment, as a developer, i feel comfortable picking and choosing which NIPs i might want to use for whatever i'm building. but i can also understand why that might be a little confusing/frustrating to others. might be a education/communication issue more than anything else.
both projects are "controlled chaos", where nostr is a little heavier on the "chaos", atproto is a little heavier on the "control".
true, however
>You can read and leave comments on this post here on Bluesky, or here on Nostr, or even here on Mastodon.
the only link that doesn't work is the Nostr one, the content doesn't load for me
That's not an issue with Nostr, it's an issue with the client that they decided to link to. Here's the content loading just fine in Primal: https://primal.net/e/nevent1qqsqfeuezj38syyppscdpu0c0zwermxl...
Been thinking about this lately and ultimately, I'm thinking that -- taking into account what we know about "federation" -- both the Nostr and ATProto models are generally pointless because they attack a problem with more complicated tech that must be solved with OR without that tech anyway.
Someone said it really well; if your solution relies on "maybe people will learn about or do new complex thing X" it's just not likely to take off.
But for the sake of argument, let's try going down that road for this. Along the way you'll be communicating with people, building trust, etc etc.
But now you've ALREADY DONE the thing you're trying to optimize for, and for which we already have an extremely resilient model, aka Mastodon-which-is-very-analogous-to-email. At that point, just make a mastodon server or servers with those people.
It just feels like the smart bet is doing that analogously to email, a model that definitely works, rather than trying to do the same thing PLUS inventing a whole new idea of "take everything with you" at the user level.
If I'd wanted my user account tied to a server controlled by somebody else, I'd just use Twitter. Mastodon isn't solving any problems here.
The beauty of Nostr is that it turns the server into a dumb relay, the server controls and owns nothing and you can replace it with another one at anytime or broadcast to multiple at once to begin with. The user is in full control and everything is held together by public-key crypto.
The magic moment is importing your secret key into an alternate client and all your contacts, posts, and feed populate from the data stored in the relays.
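What makes that moment work is that the client-side request is just a NIP-01 REQ subscription. A sketch of the JSON a client sends over the websocket, with a placeholder pubkey (raw hex, as used on the wire, rather than the bech32 "npub" form shown in UIs):

```python
import json

# sketch: the two NIP-01 messages a client sends to a relay over a
# websocket. the pubkey is a placeholder 32-byte hex string.
pubkey = "b" * 64

# subscribe to this key's notes (kind 1) and contact list (kind 3);
# the relay streams back matching stored events
req = json.dumps(["REQ", "my-sub", {"authors": [pubkey], "kinds": [1, 3]}])

# publish a signed event (id/sig and other fields elided in this sketch)
event = json.dumps(["EVENT", {"pubkey": pubkey, "kind": 1, "content": "hi"}])

print(req)
```

Send that REQ to each relay in your list and your notes and contacts stream back — that's the whole "feed populates" mechanism.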
Is it practical for an individual with a $4 VPS to spin up a mastodon server + front end client for their own use and have it interact with existing servers? Curious how much friction there is such that users end up on someone else's machine.
He's talking about Nostr here, not mastodon.
AT model is very different from Mastodon or email. It’s much closer spiritually to RSS and plain old web.
Mastodon is “many copies of the same app emailing each other”. There’s no global shared view of the network so you can’t have features like globally accurate like counts, shared identity, global search, algorithmic feeds across instances, etc.
On the other hand, in AT, the idea is just that apps aggregate information from different repos. So each application’s server has information aggregated from the entire network. Everybody sees the same consistent information; apps exist to separate experiences rather than communities.
For example, Tangled (https://tangled.org) and Leaflet (https://leaflet.pub) are AT apps, but they’re nothing similar to “mastodon servers”. These are complete apps that implement different experiences but on the same global network.
Crucially, normal people don’t need to “buy into” the protocol stuff with AT. Most Bluesky users don’t know what AT is and don’t care about it; they’re just using the app. There’s interesting crossovers you can do (each AT app sees each other AT app’s public data) which do bleed into the user experience (eg my Tangled avatar is actually populated from Bluesky) but overall apps compete on their merit with centralized apps.
Hope that makes sense. See https://overreacted.io/open-social/ for a longer article I wrote about AT with visual explanations.
> It’s much closer spiritually to RSS and plain old web
What do you mean by this? ATProto requires a giant indexing database that has access to every post in the network. Mastodon is more like a feed reader—you only get notified about the posts you care about. How is needing a giant database that knows about every RSS feed in the world closer to the plain old web?
>What do you mean by this?
RSS is a way to aggregate data from many sites into one place. AT lets you do the same, but with bells and whistles (the data is signed and typed, and there's a realtime stream in addition to pulling on demand). If you're forced to describe AT via existing technologies, AT is basically like RSS for typed JSON in Git over HTTP or WebSockets that scales to millions of users.
It is completely up to you what you decide to index. If you want to build an app that listens to records of "Bluesky post" type that are created only by people you follow, you absolutely can.
See https://bsky.app/profile/why.bsky.team/post/3m2fjnh5hpc2f (which runs locally and indexes posts relevant to you) and https://reddwarf.whey.party/ (which doesn't have a database at all and pulls data from original servers on demand + using https://constellation.microcosm.blue/ for some queries).
The reason you don't see more of these is because an isolated experience is... well, isolated. So people are less interested in running something like this compared to, say, a whole new AT app. But AT can scale down to Mastodon-like use cases too.
>ATProto requires a giant indexing database that has access to every post in the network.
Only if you want to index every post, i.e. if you want to run a full-scale social app for millions of users. As an app builder, you get to choose what you index.
For a start, you probably only want to store the records relevant to your app. For example, I doubt that Tangled (https://tangled.org/), which is an AT app, has a database with every Bluesky post. That seems absurd because Tangled is focused on a completely different use case — a social layer around Git. So Tangled only indexes records like "Tangled repo", "Tangled follow", "Tangled star", and so on.
Naturally, Tangled wants to index all posts related to Tangled — that's just how apps work. If you wanted to build a centralized app, you'd also want it to contain the whole database of what you want the app to show. This isn't specific to AT, that's just common sense—to be able to show every possible post on demand with aggregated information (such as like counts), you have to index that information, hit someone else's index, or fetch posts from the source (but then you won't know the aggregated like counts).
That said — if you want to build a copy of a specific app (like Bluesky) but filtered down to just the people you follow (with no global search, algorithmic feeds, etc), you absolutely can, as I've linked earlier. Or you can build something hybrid relying on global caches, or some other subset of the network (say, last 2 weeks of posts). How you do indexing is up to you. You're the developer here.
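To make the "you choose what you index" point concrete, here's a sketch of a consumer that keeps only the record collections one app cares about. The event dicts and the `sh.example.*` NSID prefix are simplified placeholders, not the actual firehose wire format.

```python
# sketch: filtering a stream of repo commit events down to the record
# collections one app cares about. the event shapes and the NSID prefix
# are simplified placeholders, not the real firehose format.
WANTED_PREFIX = "sh.example."  # hypothetical app namespace

def index_relevant(events: list[dict]) -> list[dict]:
    """Keep only events whose collection belongs to our app's lexicons."""
    return [e for e in events if e["collection"].startswith(WANTED_PREFIX)]

stream = [
    {"collection": "app.bsky.feed.post", "rkey": "aaa"},  # ignored
    {"collection": "sh.example.repo",    "rkey": "bbb"},  # indexed
    {"collection": "sh.example.star",    "rkey": "ccc"},  # indexed
]
print(len(index_relevant(stream)))  # 2
```

A real indexer would apply the same predicate to firehose commits before writing to its database, so Bluesky posts never touch its storage at all.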
> The reason you don't see more of these is because an isolated experience is... well, isolated.
I don't understand why you become isolated once you've built your own app. Is it because the bluesky firehose has to decide to index posts I make on my server? I guess I'm asking: how does an application decide which sources to index from — just anyone advertising that they are serving that lexicon? Why then would I become isolated by virtue of hosting only data I want to host/indexing only feeds I care to index?
(Thanks in advance I do want to grok this...)
i'm very curious about tangled. i'm building a new thing (tl;dr: an e2e testing and monitoring service) and hope to add more distributed/decentralized functionality into its core. i had been leaning heavily towards using nostr at the core, but it's nice to see atproto-based examples i can learn from, too.
I've been doing some exploratory implementation using ATProto and the Bluesky server. It strikes me as a bit over-engineered, but I'd take that over Ruby on Rails and Node.js, especially if it needs to turn into a product.
What's with the sudden upsurge in interest in ATProto-related stuff on HN? Not that I'm complaining; I'm glad to see something else take AI's spot, just curious. The last month or so has been very busy with one ATProto-related thing or another.
just a theory, but as atproto matures, there are now other example projects using the protocol for other things besides "distributed twitter clone". for example, tangled was talked about yesterday. [1]
and that probably came up because more people are wondering about the future of github as it becomes more integrated into microsoft. as things become more centralized, interest in decentralization goes up.
[1]: https://news.ycombinator.com/item?id=45543899
I think partially it's because momentum is picking up in the AT ecosystem.
Since all data lives in a single conceptual space, you start seeing community services like https://constellation.microcosm.blue/ (backlinks without running your own index), https://slices.network/ (indexes data you want and gives you a GraphQL/REST endpoint), independent relays (https://atproto.africa/), and so on.
To give you an example, https://slices-teal-relay.bigmoves.deno.net/ is a demo of Slices showing the latest teal.fm records (like Last.fm scrobbles). The thing is, teal.fm is not even launched as an app. It's just its developers already listen to music through it, the records get created in their repos, and so any other developer can aggregate their records and display them.
It's a web of realtime hyperlinked JSON created by different apps. That's exciting.
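That "any other developer can aggregate their records" bit can be sketched in a few lines. `com.atproto.repo.listRecords` is the standard XRPC query for reading one collection out of one repo; the PDS host, repo DID, and teal.fm collection name below are placeholders/assumptions, not verified values:

```python
from urllib.parse import urlencode

def list_records_url(pds_host: str, repo_did: str, collection: str,
                     limit: int = 50) -> str:
    """Build the XRPC URL for com.atproto.repo.listRecords, which returns
    the records of one collection in one repo as JSON."""
    query = urlencode({"repo": repo_did, "collection": collection, "limit": limit})
    return f"https://{pds_host}/xrpc/com.atproto.repo.listRecords?{query}"

# Hypothetical example: pull scrobble-like records out of a developer's repo.
url = list_records_url(
    "pds.example.com",           # placeholder PDS host
    "did:plc:exampledeveloper",  # placeholder repo DID
    "fm.teal.alpha.feed.play",   # assumed name for teal.fm's play records
)
# A plain GET on this URL (urllib.request, requests, curl...) returns JSON.
```

No app server, no launch, no permission needed: if the records exist in a public repo, anyone can list them and build on top.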
> The Authenticated Transfer Protocol, aka atproto, is a decentralized protocol for large-scale social web applications.
I must not be the target audience for this older article. Several paragraphs in, I had no idea what this was about. That’s how ATProto describes itself.
I wrote an intro to AT that should be broadly accessible and states the problems before the solutions. You might find it helpful: https://overreacted.io/open-social/
Bummer that all three bluesky links in the intro are dead links now, and the author's bluesky account appears to be deactivated:
https://bsky.app/profile/shreyanjain.net
By contrast, NOSTR comments continue to work just fine.
Quite a telling contrast between centralized and decentralized environments. NOSTR is indeed more resilient.
The author wanted to take down their account (to take a break) so this is actually working as designed. The takedown was issued from the author’s repository (which they control), and the downstream app server acknowledged the request.
I'm not sure I would necessarily draw that conclusion.
If the author intentionally deactivated their Bluesky account, does the fact that he can successfully do that on Bluesky lead to the conclusion that it's less resilient?
I think you've nailed a problem with all of these: they would make "deleting your stuff" HARDER. What's stopping a rogue node from saving all your stuff forever?
I think "trying to make a thing that can work through rogue or stupid nodes" is just prohibitively harder than "work on making nodes more reliable" (which I absolutely grant is extremely hard.)
> What's stopping the rogue node from saving all your stuff forever?
Nothing. You must always have this in mind when posting online: It's impossible to ensure that data is deleted and gone forever.
The comment makes so little sense that it could only be intended as a dumb gotcha from someone who thinks they're fighting in some sort of culture war about the Twitter succession. Ignoring is better than encouraging.
nostr started out very simple, but soon there were like a million NIPs.
that is the best and worst aspect of nostr. it is a very interesting, semi-chaotic box of new toys to play with. reminds me of the early web. (the second best/worst aspect of nostr is key rotation.)