- You can see from the WHATNOT meeting agenda that it was a Mozilla engineer who brought it up last time.
- Opening a PR doesn't necessarily mean that it'll be merged. Notice the unchecked tasks - there's still a lot to do on this one. Even so, given the cross-vendor support for this, it seems likely to proceed at some point.
It's an issue open on the HTML spec for the HTML spec maintainers to consider. It was opened by a Chrome engineer after at least two meetings where a Mozilla engineer raised the topic, and where there was apparently vendor support for it.
Disclaimer: I work on Chrome/Blink and I've also contributed a (very small) number of patches to libxml/libxslt.
It's not just a matter of replacing libxslt; libxslt integrates quite closely with libxml2. There's a fair amount of glue to bolt libxml2/libxslt on to Blink (and WebKit); I can't speak for Gecko.
Even when there's no work on new XML/XSLT features, there's a passive cost to just having that glue code around since it adds quirks and special cases that otherwise wouldn't exist.
I think this discussion is quite reasonable, but it also highlights the power imbalance: If this stuff is decided in closed meetings and the bug trackers are not supposed to be places for community feedback, where can the community influence such decisions?
I think it depends on the spec. Some of the working groups still have mailing lists, some of them have GitHub issues.
To be completely honest, though, I'm not sure what people expect to get out of it. I dug into this a while ago for a rather silly reason and found that it's very inside baseball; unless you really want to get invested in it, it seems like it'd be hard to meaningfully contribute.
To be honest, if people are very upset about a feature that might be added or removed, the right thing to do is probably to literally just raise it publicly, organize supporters, and generally act in protest.
Google may have a lot of control over the web, but note that WEI still didn't ship.
If people are upset about XSLT being removed, step 1 would have been to actually use it in a significant way on the web. Step 2 would have been to volunteer to maintain libxslt.
Everyone likes to complain as a user of open source. Nobody likes to do the difficult work.
I'm not that familiar with XSLT, but isn't it already quite hobbled? Can it be used in a significant way? Or is this a chicken-and-egg problem where proving it's useful requires the implementation to be filled out first?
On the link in the post you can scroll down to someone’s comment with a few links to XSLT in action.
It’s been years since I’ve touched it, but clicking the congressional bill XML link and seeing a perfectly formatted and readable page reminded me of exactly why XSLT has a place. To do the same thing without it, you’d need some other engine to parse the XML, convert it to HTML, and then ensure the proper styles get applied - this could of course be backend or frontend, either way it’s a lot of engineering overhead for a task that, with XSLT, requires just a stylesheet.
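A minimal sketch of what such a stylesheet can look like (element names invented; the XML document would reference it with an <?xml-stylesheet?> processing instruction):

<?xml version="1.0"?>
<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <!-- render a <bill> document as a readable HTML page -->
  <xsl:template match="/bill">
    <html>
      <body>
        <h1><xsl:value-of select="title"/></h1>
        <xsl:for-each select="section">
          <h2><xsl:value-of select="heading"/></h2>
          <p><xsl:value-of select="text"/></p>
        </xsl:for-each>
      </body>
    </html>
  </xsl:template>
</xsl:stylesheet>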
WhatWG has a fairly well documented process for feature requests. Issues are not usually decided in closed meetings. But there’s a difference between constructive discussion and the stubborn shameless entitlement that some members of the community are displaying in their comments.
FWIW the meetings aren't closed; unlike the W3C, the WHATWG doesn't require paid membership to attend.
The bug trackers are also a fine place to provide community feedback. For example there's plenty of comments providing use cases that weren't hidden. But if you read the hidden ones (especially on the issue rather than PR) there's some truly unhinged commentary that rightly resulted in being hidden and unfortunately locking of the thread.
Ultimately the way the community can influence decisions is to not be completely unhinged.
Like someone else said the other way would be to just use XSLT in the first place.
Honestly, your chance to impact this decision was when you decided what technologies to use on your website, and then statistically speaking [1], chose not to use XSLT in the browser. If the web used it like crazy we would not be having this conversation.
Your other opportunity is to put together a credible plan to resource the XSLT implementations in the various browsers. I underline, highlight, bold, and italicize the word "credible" here. You are facing an extremely uphill battle from the visible lack of support for the development; any truly credible offer should have come many years ago. Big projects are well aware of the utility of last-minute, emotionally-driven offers of support in the midst of a burst of publicity, viz, effectively zero.
I don't know that the power is as imbalanced as people think here. A very long and drawn-out conversation has been had by the web as a whole; judging by the vast bulk of implementation work, the web has agreed this is not a terribly useful technology, and this is the final closing chapter where the browsers are basically implementing the will of the web. The standard for removal isn't "literally 0 usage in the entire world", and whatever the standard is, if XSLT isn't on the "remove" side of it, that would just be a sign it needs to be tuned up, because XSLT is a complete non-entity on the web. If you are not feeling like your voice is being respected, it's because it's one of literally millions upon millions; what do you expect?
[1]: I know exceptions are reading this post, but you are exceptions. And not terribly common ones.
Statistically, how many websites are using WebUSB? I'm guessing fewer than XSLT, which is used by e.g. the US Congress website.
I have a hard time buying the idea that document templating is some niche use-case compared to pretty much every modern JavaScript API. More realistically, lots of younger people don't know it's there. People constantly bemoan HTML's "lack" of client-side includes or extensible component systems.
You seem to be assuming that I would argue against removing WebUSB. If it went through the same process and the system as a whole reached the same conclusion, I wouldn't fight it too hard personally.
There's probably half-a-dozen other things that could stand serious thought about removal.
There is one major difference though, which is that if you remove WebUSB, the functionality is just gone, whereas XSLT can be done through JavaScript/WebAssembly just fine.
Document templating is obviously not a niche case. That's why we've got so many hundreds of them. We're not lacking in solutions for document templating, we're drowning in them. If XSLT stands out in its niche, it is as being a particularly bad choice, which is why nobody (to that first approximation we've all heard so much about) uses it.
My guess is that they'll shuffle people to PDF or move rendering to the server side, which is a common (and, with today's computing power, extremely cheap) way to generate HTML from XML.
Is it cheaper than sending XML and a stylesheet though?
Further, PDF and server-side rendering are fine for achieving the same display, but they remove the XML of it all - that is to say, someone might be using the raw XML to power tools, feeds, etc. If XSLT goes away and Congress drops the XML links in favor of PDFs etc., that breaks more than just the pretty formatting.
I just built a website in XSLT, and implementing some form of client-side include in XSLT is not easier than doing the same in JavaScript. While I agree with you that client-side include is sorely missing in HTML, XSLT is not the answer to that problem. Anyone who doesn't want to use JavaScript to implement client-side include won't want to use XSLT either.
> If the web used it like crazy we would not be having this conversation.
It's been a standard part of the Web platform for years. The only question should be, "Is _anyone_ using it?", not whether it's being "used like crazy" or not.
A lot of very old SPA-like heavy applications use XSLT. Basically, enterprise web applications (not websites) that predate fetch, REST, and targeted or still target Internet Explorer 5/6.
There was a time where the standard way to build a highly interactive SPA was using SOAP services on the backend combined with iframes on the front end that executed XSLT in the background to update the DOM.
Obviously such an approach is extremely out of date and you won't find it on any websites you use. But, a lot of critical enterprise software was built this way and is kind of stuck like this.
At first glance the Library of Congress link appears to be using server-side XSLT, which would not be affected by this proposal.
The Congress one appears to be the first legit example I have seen.
At first glance the Congress use case does seem like it would be fully covered by CSS [you can attach CSS stylesheets to generic XML documents in a similar fashion to XSLT]. Of course someone would have to make that change.
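For reference, attaching CSS to raw XML looks like this (file names invented; the catch is that the CSS has to assign a display type to every element, since XML elements have no defaults, and CSS can't reorder or generate structure the way XSLT can):

<?xml version="1.0"?>
<?xml-stylesheet type="text/css" href="bill.css"?>
<bill>
  <title>An Example Bill</title>
  <section>...</section>
</bill>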
> I am very sure the people at google are aware of the rss feed usage.
No. No they aren't. As you can see in the discussion: https://github.com/whatwg/html/issues/11523 where the engineer who proposed this literally updates his "analysis" as people point out use cases he missed.
Quote:
--- start quote ---
albertobeta: there is a real-world and modern use case from the podcasting industry, where I work. Collectively, we host over 4.5 million RSS feeds. Like many other podcast hosting companies, we use XSLT to beautify our raw feeds and make them easier to understand when viewed in a browser.
Thanks for all of the comments, details, and information on this issue. It's clear that XSLT (and talk of removing it) strikes a nerve with some folks. I've learned a lot from the posts here.
--- end quote ---
> Don't confuse people disagreeing with you with people not understanding you.
You're angry you didn't get your way, but the Googler's decision seems logical. I think most software developers maintaining a large software platform would have made a similar decision given the evidence presented (as evidenced by other web browsers making the same one).
The only difference from most other software is that Google operates somewhat in the open. In the corporate world there would be some customer service rep to shield devs from the special interest group's tantrum.
Well, it's Google who jumped at the opportunity citing their own counters and stats.
Just like they did the last time when they tried to remove confirm/prompt[1] and were surprised to see that their numbers don't paint the full picture, as literally explicitly explained in their own docs: https://docs.google.com/document/d/1RC-pBBvsazYfCNNUSkPqAVpS...
You'd think that the devs of the world's most popular browser would have a little more care than just citing some numbers, ignoring all feedback, and moving forward with whatever they want to do?
That's not completely wrong, but also misses some nuance. E.g. the thread mentions the fact that web support is still stuck at XSLT 1.0 as a reason for removal.
But as far as I know, there were absolutely zero efforts by browser vendors before to support newer versions of the language, while there was enormous energy to improve JavaScript.
I don't want to imply that if they had just added support for XSLT 3.0 then everyone would be using XSLT instead of JavaScript today and the latest SIMD optimizations of Chrome's XPath pipeline would make the HN front-page. The language is just too bad for that.
But I think it's true that there exists a feedback loop: Browsers can and do influence how much a technology is adopted, by making the tech less or more painful to use. Then turning around and saying no one is using the tech, so we'll remove it, is a bit dishonest.
JavaScript was instantly a hit from the day it was released, and it grew from there.
XSLT never took off. Ever. It has never been a major force on the web, not even for five minutes. Even during the "XML all the things!" phase of the software engineering world, with every tailwind it would ever have, it was never a serious player.
There was, at no point, any reason to invest in it any further.
Moreover, even if you push a button and rewrite history so that even so it was heavily invested in anyhow, I see no reason to believe it would have ever been a major force in that alternate history either. I would personally contend that it has always been a bad idea, and if anything, it has been unduly propped up by the browsers and overinvested in as it is. But perhaps less inflammatorily and more objectively, it has always been a foreign paradigm that most programmers have no experience in, and this was even more true in the "XML all the things!" era which predates the initial Haskell burst that pushed FP forward by a good solid decade, and the prospects of it ever being popular were never all that great.
I also don't see XSLT solving any problem that JavaScript could not solve. Heck, if you really need XSLT in the browser, using JavaScript you could even call some library like SaxonJS, or you could run it in WebAssembly.
True, but that raises the question, why don't the browsers do that? I think no one would object if they removed XSLT from the browser's core and instead loaded up some WASM/JavaScript implementation when some XSLT is actually encountered. Sort of like a "built-in extension".
Then browser devs could treat it like an extension (plus some small shims in the core) while the public API wouldn't have to change.
XSLT is interesting because it has a very different approach to parsing XML, and for some transformations the resulting code can be quite compact. In particular, you don't have an issue with quoting/escaping special characters most of the time while still being able to write XML/HTML syntax. But then JSX from React solves that too. So the longer you look at it, the less the advantages of XSLT stand out.
You have basically your entire "framework" with no need to figure out how to set up a build environment because there is no build environment; it's just baked into the browser. Apparently in XSLT 3.0, the passthrough template is shortened to just `<xsl:mode on-no-match="shallow-copy"/>`. In XSLT 2.0+ you could also check against `base-uri(/)` instead of needing to pass in the current page with `<nav-menu current-page="foo.xhtml"/>`, and there's no `param` and `with-param` stuff needed. In modern XSLT 3.0, it should be able to be something more straightforward like:
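(A rough, untested sketch of the idea - untested because no browser ships XSLT 3.0 - assuming a nav-menu.xml file holding menu/item entries:)

<xsl:stylesheet version="3.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <!-- copy everything through unchanged unless a template below matches -->
  <xsl:mode on-no-match="shallow-copy"/>
  <!-- URI of the page currently being transformed -->
  <xsl:variable name="page" select="base-uri(/)"/>
  <!-- expand <nav-menu/> wherever it appears -->
  <xsl:template match="nav-menu">
    <nav>
      <xsl:for-each select="doc('nav-menu.xml')/menu/item">
        <xsl:choose>
          <!-- render the current page as plain text instead of a link -->
          <xsl:when test="ends-with($page, @href)">
            <span><xsl:value-of select="."/></span>
          </xsl:when>
          <xsl:otherwise>
            <a href="{@href}"><xsl:value-of select="."/></a>
          </xsl:otherwise>
        </xsl:choose>
      </xsl:for-each>
    </nav>
  </xsl:template>
</xsl:stylesheet>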
And now you have a `<nav-menu/>` component that you can add to any page. So to the extent that you're using it to create simple website templates but you're not a "web dev", it works really well for people that don't want to go through all of the hoops that professional programmers deal with. Asking people to figure out react to make a static website is absurd.
Wow, thank you. Your first example is actually what I have been trying to do, but I could not get it to work. I did search for examples or explanations for hours (spread over a week or so). I found the documentation of each of the parts and directives used, but I just could not figure out how to pull it together.
Your last example is what I started out with, including the pass-through template. You may remember this message from almost two months ago: https://news.ycombinator.com/item?id=44398626
One comment for the XSLT 3 example: href="" doesn't disable the link; it just turns into a link to self (which it would be anyways if the value was present). The href attribute needs to be gone completely to disable the link.
Nodes you output don't have type "node-set" - instead, they're what is called a "result tree fragment". You can store that to a variable, and you can use that variable to insert the fragment into output (or another variable) later on, but you cannot use XPath to query over it. From the XSLT 1.0 spec:
> Variables introduce an additional data-type into the expression language. This additional data type is called result tree fragment. A variable may be bound to a result tree fragment instead of one of the four basic XPath data-types (string, number, boolean, node-set). A result tree fragment represents a fragment of the result tree. A result tree fragment is treated equivalently to a node-set that contains just a single root node. However, the operations permitted on a result tree fragment are a subset of those permitted on a node-set. An operation is permitted on a result tree fragment only if that operation would be permitted on a string (the operation on the string may involve first converting the string to a number or boolean). In particular, it is not permitted to use the /, //, and [] operators on result tree fragments.
So using apply-templates on a variable doesn't work. This is actually where I got stuck before. I just was not sure, because I could not verify that everything else was correct.
Ah, I could've sworn that it worked in some version of the page that I tried as I iterated on things, but it could be that the browser just froze on my previously working page and I fooled myself.
Adding xmlns:exsl="http://exslt.org/common" to your xsl:stylesheet and doing select="exsl:node-set($nav-menu-items)/item" seems to work on both Chrome and Librewolf.
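In context, that looks something like this (a minimal sketch; the item entries are made up):

<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
    xmlns:exsl="http://exslt.org/common">
  <!-- inline menu data; in XSLT 1.0 a variable like this holds a "result tree fragment" -->
  <xsl:variable name="nav-menu-items">
    <item href="index.xhtml">Home</item>
    <item href="about.xhtml">About</item>
  </xsl:variable>
  <xsl:template match="nav-menu">
    <!-- exsl:node-set() turns the fragment into a real node-set that XPath can query -->
    <xsl:for-each select="exsl:node-set($nav-menu-items)/item">
      <a href="{@href}"><xsl:value-of select="."/></a>
    </xsl:for-each>
  </xsl:template>
</xsl:stylesheet>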
<?xml version="1.0" encoding="UTF-8" ?>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
<?xml-stylesheet type="text/xsl" href="site.xsl"?>
<document name="about">
  <title>About Us</title>
  <content>
    html content here, to be inserted without change
  </content>
</document>
If I use the document() function, with nav-menu.xml looking like this:
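(Presumably something along these lines; the exact file isn't shown, so the entries here are made up:)

<?xml version="1.0"?>
<menu>
  <item href="index.xhtml">Home</item>
  <item href="about.xhtml">About</item>
</menu>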
It looks like it's related to your setting the default namespace xmlns="http://www.w3.org/1999/xhtml". You could either add an xmlns:example="http://example.org/templates" and then replace `item` with `example:item` everywhere, or you can override the default namespace within your variable's scope:
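(A sketch of the second option; xmlns="" puts the items back in no namespace:)

<xsl:variable name="nav-menu-items">
  <item xmlns="" href="index.xhtml">Home</item>
  <item xmlns="" href="about.xhtml">About</item>
</xsl:variable>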
I think you also don't really need to set the default namespace to xhtml, so I believe you could remove that and not worry about namespaces at all (except for xsl and exsl).
The test is failing because it's `/about.xhtml` in the template but `about` outside. You'd either need to add a name attribute to item to compare on or make it match the href.
That should make your thing work if I haven't fooled myself again. :)
> I think you also don't really need to set the default namespace to xhtml
You are right. I removed it, and it works. Typical "copy from Stack Overflow" error. These namespaces are a mystery and not intuitive at all. I suppose most people don't notice that, because it only applies to XML data within the stylesheet. Most people won't have that, so they won't notice an issue. The less the better.
For the other error, my mistake, duh! In my original example in https://news.ycombinator.com/item?id=44961352 I am comparing $current/@name to a hardcoded value, so if I want to keep that comparison I have to add that value to the nav-menu data. Or use a value that's already in there.
I went with adding a name="about" attribute to the nav-menu because it keeps the documents cleaner: <document name="about"> just looks better, and it also allows me to treat it like an ID that doesn't have to match the URL, which allows renaming/moving documents around without having to change the content. (They might go from about.xhtml to about/index.xhtml, for example.)
I am also probably going to use the document() function instead of exsl:node-set(), because having the menu data in a separate file is easier to manage in this case. It's good to know about that option, though. Being able to iterate over some local data is a really useful feature. I'll keep that around as an example.
The final piece of the puzzle was:
<xsl:if test="position() != last()"> | </xsl:if>
to put a separator between the items, but not after.
That sorted, now it all works. Thank you again.
BTW, it's funny that we are turning Hacker News into an XSLT support forum. I guess I should write all that up into a post some day.
Yeah, unfortunately the one criticism of XSLT that you can't really deny is that there's no information out there about how to use it, so beyond the tiny amount of documentation on MDN, you kind of have to just figure out your own patterns. It feels a little unfair though that it basically comes down to "this doesn't have a mega-corporation marketing it". That and the devtools for it are utterly broken/left in the early 00s for similar reasons. You could imagine something could exist like the Godbolt compiler explorer for template expansion showing the input document on the left and output on the right with color highlighting for how things expanded, but instead we get devtools that barely work at all.
You're right on the href; maybe there's not a slick/more "HTML beginner friendly" way to get rid of the <xsl:choose> stuff even in 3.0. I have no experience with 3.0 though, since it doesn't work in browsers.
I get a little fired up about the XSLT stuff because I remember being introduced to HTML in an intersession school class when I was like... 6? XSLT wasn't around at that time, but I think I maybe learned about it when I was ~12-13, and it made sense to me then. The design of all of the old stuff was all very normal-human approachable and made it very easy to bite a little bit more off at a time to make your own personal web pages. "Use React and JSON APIs" or "use SSR" seems to just be giving up on the idea that non-programmers should be able to participate in the web too. Should we do away with top level HTML/CSS while we're at it and just use DOM APIs?
There were lots of things in the XML ecosystem I didn't understand at the time (what in the world was the point of XSDs and what was a schema and how do you use them to make web pages? I later came to appreciate those as well after having to work as a programmer with APIs that didn't have schema files), but the template expansion thing to make new tags was easy to latch onto.
Right, that's a big issue too. When the XSL breaks (in this case when I use <xsl:apply-templates select="$nav-menu-items/item">), I get an empty page and nothing telling me what could be wrong. If I remove the $, the page works, and the apply-templates directive is just left out.
That's only if the original document is an XHTML document that will have scripts loaded. Other XML documents, such as RSS feeds, will not have any support for JS, short of something silly like putting it in an iframe.
Community feedback is usually very ad hoc. Platform PMs will work with major sites, framework maintainers, and sometimes do discussions and polls on social sites. IOW, they try to go where the community that uses the features are, rather than stay on GitHub in the spec issues.
The main thing that seems unaddressed is the UX if a user opens a direct link to an XML file and will now just see tag soup instead of the intended rendering.
I think this could be addressed by introducing a <?human-readable ...some url...?> processing instruction that browsers would interpret like a meta tag redirect. Then sites that are interested could put that line at the top of their XML files and redirect to an alternative representation in HTML or even to a server-side or WASM-powered XSLT processor for the file.
Sort of like an inverse of the <link rel="alternate" ...> solution that the post mentioned.
The only thing this doesn't fix is sites that are abandoned and won't update, or are part of embedded devices and can't update.
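(Sketching the idea with invented syntax, since no such processing instruction exists today:)

<?xml version="1.0"?>
<?human-readable href="https://example.com/feed-viewer?src=/feed.xml"?>
<rss version="2.0">
  ...
</rss>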
> I think this could be addressed by introducing a <?human-readable ...some url...?> processing instruction that browsers would interpret like a meta tag redirect. Then sites that are interested could put that line at the top of their XML files and redirect to an alternative representation in HTML or even to a server-side or WASM-powered XSLT processor for the file.
HTTP has already had this since the 90s. Clients send the Accept HTTP header indicating which format they want and servers can respond with alternative representations. You can already respond with HTML for browsers and XML for other clients today. You don’t need the browser to know how to do the transformation.
This is not breaking the web, stop being so needlessly hyperbolic. XSLT use is absolutely tiny. If you removed it, >99.9% of the web wouldn’t even notice.
You're equivocating; "Don't break the Web" means what it has always meant, but you're not-so-subtly suggesting it means something else. Stop being a waste of time.
I actually found that particular response to be quite disappointing. It should give pause to those advocating removal of XSLT that these three totally disparate use cases could already be gracefully handled by a single technology which is:
* side effect free (a pure data to data transformation)
* stable, from a spec perspective, for decades
* completely client-side
Isn't this basically an A+ report card for any attempt at making a powerful general tool? The fact that the suggested solution in the absence of XSLT is to toil away at implementing application-specific solutions forever really feels like working toward the wrong direction.
Google doesn't have to maintain the web, they chose to. They also chose to make the web infinitely more complicated so that others are less likely to "compete" for that responsibility. You don't get to insert yourself into that position and then only reap the benefits without putting in the required effort.
Disclaimer: I work on Chrome and I have contributed a (very) small number of fixes to libxml2/libxslt for some of the recent security bugs.
Speaking from personal experience, working on libxslt... not easy for many reasons beyond the complexity of XSLT itself. For instance:
- libxslt is linked against by all sorts of random apps and changes to libxslt (and libxml2) must not break ABI compatibility. This often constrains the shape of possible patches, and makes it that much harder to write systemic fixes.
- libxslt reaches into libxml and reuses fields in creative ways, e.g. libxml2's `xmlDoc` has a `compression` field that is ostensibly for storing the zlib compression level [1], but libxslt has co-opted it for a completely different purpose [2].
- There's a lot of missing institutional knowledge and no clear place to go for answers, e.g. what does a compile-time flag that guards "refactored parts of libxslt" [3] do exactly?
Sounds like libxslt needs more than just a small number of fixes, and it sounds like Google could be paying someone, like you, to help provide the necessary guidance and feedback to increase the usability and capabilities of the library and evolve it for the better.
Instead Google and others just use it, and expect that any issues that come up to be immediately fixed by the one or two open source maintainers that happen to work on it in their spare time. The power imbalance must not be lost on you here...
If you wanted to dive into what [3] does, you could do so, you could then document it, or refactor it so that it is more obvious, or remove the compile time flag entirely. There is institutional knowledge everywhere...
Or, the downstream users who use it and benefit directly from it could step up, but websites and their users are extremely good at expecting things to just magically keep working, especially if they don't pay for it. It was free, so it should be free forever, and someone set it up many moons ago, so it should magically keep working for many more!
// of course we know that, as end-users became the product, Big Tech [sic?] started making sure that users remain dumb.
Browser vendors aren't maintaining the web for free; they are for-profit corporations that have chosen to take on that role for the benefits it provides to them. It's only fair that we demand that they also respect the responsibilities that come with it. And we can also point out the hollowness of complaints about the hardship of maintaining the web's legacy when they keep making it harder for independent browser developers by adding tons of new complexity.
Sure, of course, but unless funding is coming from users the economics won't change, because:
The vendors cite an aspect of said responsibility (security!) to get rid of another aspect (costly maintenance of a low-revenue feature).
The web is evolving, there's a ton of things that developers (and website product people, and end-users) want. Of course it comes with a lot of "frivolous" innovation, but that's part of finding the right abstractions/APIs.
(And just to make it clear, I think it's terrible for the web and vendors that ~100% of the funding comes from a shady oligopoly that makes money by selling users - but IMHO this doesn't invalidate the aforementioned resource allocation trade off.)
> libxslt is linked against by all sorts of random apps and changes to libxslt (and libxml2) must not break ABI compatibility. This often constrains the shape of possible patches, and makes it that much harder to write systemic fixes.
I’m having trouble expressing this in a way that won’t likely sound harsher than I really want, but, uh, yes? That’s the fundamental difference between maintaining a part of the commons that anybody can benefit from and a subdirectory in a monorepo. The bazaar incurs coordination costs, and not being able to go and fix all the callers is one of them.
(As best as I can see, Chrome’s approach is largely to make everything a part of the monorepo, so maintaining a part of the commons may not be high on the list of priorities.)
This not to defend any particular ABI choice. Too often ABI is left to luck and essentially just happens instead of being deliberately designed, and too often in those cases we get unlucky. (I’m tempted to recite an old quote[1] about file formats, which are only a bit more sticky than public ABI, because of how well it communicates the amount of seriousness the subject ought to evoke: “Do you, Programmer, take this Object to be part of the persistent state of your application, to have and to hold, through maintenance and iterations, for past and future versions, as long as the application shall live?”)
I’m not even deliberately singling out what seems to me like the weakest of the examples in your list. It’s just that ABI, to me, is such a fundamental part of lib-anything that raising it as an objection against fixing libxslt or libxml2 specifically feels utterly bizarre.
It's one thing if the library was proactively written with ABI compatibility in mind. It's another thing entirely if the library happens to expose all its implementation details in the headers, making it that much harder to change things.
When I first encountered the early GNOME 1 software back in the very late 1990s, and DV (the libxml author) was active, I was very surprised when I asked for the public API for a library and was told: look at the header files and the source.
They simply didn't seem to have a concept of data hiding and encapsulation, or worse, felt it led to evil nasty proprietary hidden code and were better than that.
They were all really nice people, mind you - I met quite a few of them, still know some - and the GNOME project has grown up a lot, but I think that's where libxml was coming from. Daniel didn't really expect it to be quite so widely used, though, I'm sure.
I've actually considered stepping up to maintain libxslt, but I don't know enough about building on Windows and don't really have access to non-Linux systems. Remote access will only go so far on Windows, I think, although it'd be OK on Mac.
It might be better to move to one of the Rust XML stacks that are under active development (one more active than the other).
Former Mozilla and Google (Chrome team specifically) dev here. The way I see what you're saying is:
Representatives from Chrome/Blink, Safari/Webkit, and Firefox/Gecko are all supportive of removing XSLT from the web platform, regardless of whether it's still being used. It's okay because someone from Mozilla brought it up.
Out of those three projects, two are notoriously under-resourced, and one is notorious for constantly ramming through new features at a pace the other two projects can't or won't keep up with.
Why wouldn't the overworked/underresourced Safari and Firefox people want an excuse to have less work to do?
This appeal to authority doesn't hold water for me because the important question is not 'do people with specific priorities think this is a good idea' but instead 'will this idea negatively impact the web platform and its billions of users'. Out of those billions of users it's quite possible a sizable number of them rely on XSLT, and in my reading around this issue I haven't seen concrete data supporting that nobody uses XSLT. If nobody really used it there wouldn't be a need for that polyfill.
Fundamentally the question that should be asked here is: Billions of people use the web every day, which means they're relying on technologies like HTML, CSS, XML, XSLT, etc. Are we okay with breaking something that 0.1% of users rely on? If we are, okay, but who's going to tell that 0.1% of a billion people that they don't matter?
The argument I've seen made is that Google doesn't have the resources (somehow) to maintain XSLT support. One of the googlers argued that new emerging web APIs are more popular, and thus more deserving of resources. So what we've created is a zero-sum game where any new feature added to the platform requires the removal of an existing feature. Where does that game end? Will we eventually remove ARIA and/or screen reader support because it's not used by enough people?
I think all three browser vendors have a duty to their users to support them to the best of their ability, and Google has the financial and human resources to support users of XSLT and is choosing not to.
That same argument applies to numerous web technologies, though.
Applied to each individually it seems to make sense. However the aggregate effect is kill off a substantial portion of the web.
In fact, it's an argument to never add a new web technology: Should 100% of web users be made vulnerable to bugs in a new technology that 0% of the people are currently using?
Plus it's a false dichotomy. They could instead address XSLT security... e.g., as various people have suggested, by building in the XSLT polyfill they are suggesting all the XSLT pages start using as an alternative.
The vulnerabilities associated with native client-side XSLT are not in the language itself (XSLT 1.0) but instead are caused by bugs in the browser implementations.
P.S. The XSLT language is actively maintained and is used in many applications and contexts outside of the browser.
If this is the reason to remove and or not add something to the web, then we should take a good hard look at things like WebSerial/WebBluetooth/WebGPU/Canvas/WebMIDI and other stuff that has been added that is used by a very small percentage of people yet all could contain various security bugs...
If the goal is to reduce security bugs, then we should stop introducing niche features that only make sense when you are trying to have the browser replace the whole OS.
Whatever you do with XSLT you can do in a saner way, but for whatever we need serial/Bluetooth/WebGPU/MIDI for, there is no other way; and canvas is massively used.
I'd love to see more powerful HTML templating that'd be able to handle arbitrary XML or JSON inputs, but until we get that, we'll have to make do with XSLT.
For now, there's no alternative that allows serving an XML file with the raw data from e.g. an embedded microcontroller in a way that renders a full website in the browser if desired.
Even more so if you want to support people downloading the data and viewing it from a local file.
If you're OK with the startup cost of 2-3 more files for the viewer bootstrap, you could just fetch the XML data from the microcontroller using JS. I assume the xsl stylesheet is already a separate file.
I don't think anyone is attached to the technology of XSLT itself, but to the UX it provides.
Your microcontroller only serves the actual XML data; the XSLT is served from a different server somewhere else (e.g., the manufacturer's website). You can download the .xml, double-click it, and it'll get the XSLT treatment just the same.
In your example, either the microcontroller would have to serve the entire UI to parse and present the data, or you'd have to navigate to the manufacturer's website, input the URL of your microcontroller, and it'd have to do a CORS fetch to process the data.
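(Concretely, the device would serve something like this - names invented; note that browsers apply same-origin/CORS restrictions to the stylesheet fetch, so the manufacturer's server would have to allow it:)

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="https://manufacturer.example/status-view.xsl"?>
<device-status>
  <temperature unit="C">23.4</temperature>
  <uptime seconds="86400"/>
</device-status>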
Service workers are already predestined to do this kind of resource processing and interception, and it'd provide the same UX.
The service worker would not be associated with any specific origin, but it would still receive the regular lifecycle of events, including a fetch event for every load of an xml document pointing at this specific service worker script.
Of course there is a better way than webserial/bluetooth/webgpu/webmidi: Write actual applications instead of eroding the meaning and user expectations of a web browser. The expectation should not be that the browser can access your hardware directly. That is a much more significant risk for browsers than XSLT could ever be.
It could be. The meaningful argument is over whether the javascript polyfill should be built into the browser (in which case, browser support remains the same as it ever was, they just swap out a fast but insecure implementation for a slow but secure one), or whether site operators, principally podcast hosts, should be required to integrate it into their sites and serve it.
The first strategy is obviously correct, but Google wants strategy 2.
As discussed in the GitHub thread, strategy two is fundamentally flawed because there’s no other way to make an XML document human readable in today’s browsers. (CSS is close but lacking some capabilities)
So site operators who rely on this feature today are not merely asked to load a polyfill but to fundamentally change the structure of their website - without necessarily getting to the same result in the end.
So the Safari developers are overworked/under-resourced, but Google somehow should have infinite resources to maintain things forever? Apple is a much bigger company than Google these days, so why shouldn't they also have these infinite resources? Oh, right, its because fundamentally they don't value their web browser as much as they should. But you give them a pass.
Many such cases. Remember when the Chrome team seriously thought they could just disable JavaScript alert() overnight [1][2] and not break decades of internet compatibility? It still makes me smile how quietly this was swept under the rug once it crashed and burned, just like how the countless "off-topic" and "too emotional" comments on Github said it would.
Glad to see the disdain for the actual users of their software remains.
Seriously though, if I were forced to maintain every tiny legacy feature in a 20 year old app... I'd also become a "former" dev :)
Even in its heyday, XSLT seemed like an afterthought. Probably there are a handful of legacy corporate users hanging on to it for dear life. But if infinitely more popular techs (like Flash or FTP or non HTTPS sites) can be deprecated without much fuss... I don't think XSLT has much of a leg to stand on...
> But if infinitely more popular techs (like Flash or FTP or non HTTPS sites) can be deprecated without much fuss... I don't think XSLT has much of a leg to stand on...
Flash was not part of the web platform. It was a plugin, a plugin that was, over time, abandoned by its maker.
FTP was not part of the web platform. It was a separate protocol that some browsers just happened to include a handler for. If you have an FTP client, you can still open FTP links just fine.
Non-HTTPS sites are being discouraged, but still work fine, and can reasonably be expected to continue to work indefinitely, though they are likely to be discouraged a bit harder over time.
XSLT is part of the web platform. And removing it breaks various things.
XSLT was awesome back in the day. You could get a block of XML data from the server, and with a bit of very simple scripting, slice it, filter it, sort it, present summary or detail views, generate tables or forms, all without a server round trip. This was back in IE6 days, or even IE5 with an add-on.
We built stuff with it that amazed users, because they were so used to the "full page reload" for every change.
This came up in some of the comments: https://github.com/whatwg/html/issues/11523#issuecomment-315...
if you click the links instead of copy/pasting into your reader you get a page full of raw XML. It's not harmful or anything but it's not a great look. You can't really expect your users to just never click on your links, that's usually what links are for.
+1. I worked on an internal corporate eCommerce system in 2005 built entirely on DOM + XSLT to create the final HTML. It was an atrocious pain in the neck to maintain (despite being server-side, so the browser never had to deal with the XSLT).
Unless you still manipulate XML and need to transform it in various other formats through XSLT/XSL-FO, I don’t see why anyone would bother with it.
It always cracks me up when people « demand » support for features hardly ever used, for which they won't spend a dime or a minute to help.
When I see "reps from every browser agree" my bullshit alarm immediately goes off. Does it include unanimous support from browser projects that are either:
1. not trillion dollar tech companies
or
2. not 99% funded by a trillion dollar tech company.
I have long suspected that Google gives so much money to Mozilla both for the default search option, but also for massive indirect control to deliberately cripple Mozilla in insidious ways to massively reduce Firefox's marketshare. And I have long predicted that Google is going to make the rate of change needed in web standards so high that orgs like Mozilla can't keep up and then implode/become unusable.
Well, every browser engine that is part of WHATWG. That's how working groups... work. The current crop of "not Chrome/Firefox/Webkit" aren't typically building their own browser engines though. They're re-skinning Chromium/Gecko/Webkit.
It's not a huge conspiracy, but it is worthwhile to consider what the incentives are for people from each browser vendor. In practice all the vendors probably have big backlogs of work they are struggling to keep up with. The backlogs are accumulating in part because of the breakneck pace at which new APIs and features are added to the web platform, and in part because of the unending torrent of new security vulnerabilities being discovered in existing parts of the platform. Anything that reduces the backlog is thus really appealing, and money doesn't have to change hands.
Arguably, we could lighten the load on all three teams (especially the under-resourced Firefox and Safari teams) by slowing the pace of new APIs and platform features. This would also ease development of browsers by new teams, like Servo or Ladybird. But this seems to be an unpopular stance because people really (for good reason) want the web platform to have every pet feature they're an advocate for. Most people don't have the perspective necessary to see why a slower pace may be necessary.
>I have long suspected that Google gives so much money to Mozilla both for the default search option, but also for massive indirect control to deliberately cripple Mozilla in insidious ways to massively reduce Firefox's marketshare.
This has never ever made sense, because Mozilla is not at all afraid to piss in Google's cheerios at the standards meetings. How many different variations of FLoC and similar adtech-oriented features did they shoot down? It's gotta be at least 3. Not to mention the anti-fingerprinting tech that's available in Firefox (not by default, because it breaks several websites) and opposition to several Google-proposed APIs on grounds of fingerprinting. And keeping Manifest V2 around indefinitely for the ad blockers.
People just want a conspiracy, even when no observed evidence actually supports it.
>And I have long predicted that Google is going to make the rate of change needed in web standards so high that orgs like Mozilla can't keep up and then implode/become unusable.
That's basically true whether incidentally or on purpose.
Controlled opposition is absolutely a thing, and to think that people at trillion dollar companies wouldn't do this is naive. I'm not claiming for a fact that Mozilla is controlled opposition; I'm just saying it's very feasible that it could be, and I look for signs of it.
You give examples of things they disagree on, and I wouldn't refute that. However, I would say that Google is going to pick and choose their battles, because ultimately the things they appear to "lose on" sort of don't matter. Fingerprinting is a great example - yes, Firefox provides anti-fingerprinting, but it's still largely pretty useless, and its impact is even more meaningless because so few people use it. If you have JavaScript on and aren't using a VPN, chances are your anti-fingerprinting isn't actually doing much other than annoying you and breaking sites.
The only real thing to be used for near-complete anonymity is Tor, but only when it's also used in the right way, and when JavaScript is also turned off. And even then there are ways it could fail and probably has failed.
> Representatives from Chrome/Blink, Safari/Webkit, and Firefox/Gecko are all supportive of removing XSLT
Did anybody bother checking with Microsoft? XML/XSLT is very enterprisey and this will likely break a lot of intranet (or $$$ commercial) applications.
Secondly, why is Firefox/Gecko given full weight for their vote when their marketshare is dwindling into irrelevancy? It's the equivalent of the crazy cat hoarder who wormed her way onto the HOA board speaking for everyone else. No.
There are countries like Germany where Firefox still has around 10% market share [0], or closer to 20% on the desktop, only second behind Chrome [1]. Not exactly irrelevant.
> Secondly, why is Firefox/Gecko given full weight for their vote when their marketshare is dwindling into irrelevancy?
The juxtaposition of these two statements is very funny.
Firefox actually develops a browser, Microsoft doesn't. That's why Firefox gets a say and Microsoft doesn't. Microsoft jumped off the browser game years ago.
No, changing the search engine from Google to Bing in chromium doesn't count.
Ultimately, Microsoft isn't implementing jack shit around XSLT because they aren't implementing ANY web standards.
You make it sound like those two thoughts are incompatible in juxtaposition, but they are in fact perfectly consistent, even if you were correct that Microsoft isn't building anything, as the premise is that users matter more than elbow grease. The reason why you'd want to ask Microsoft is the same reason why you might not bother consulting Firefox: because Microsoft has actual users they represent, and Firefox does not.
"Secondly, why is Firefox/Gecko given full weight for their vote when their marketshare is dwindling into irrelevancy?"
There was not really a vote in the first place, and FF is still dependent on Google. Otherwise FF (users) represents a vocal and somewhat influential minority, capable of creating shitstorms if the pain level is high enough.
Personally, I always thought XSLT is somewhat weird, so I never used it. Good choice in hindsight.
So Microsoft is cucked by Google, and Mozilla is a puppet regime of Google at this point.
Seems like a rigged game to me.
Yes it's a wrapper but Microsoft represents a completely different market with individual needs/wants.
If it wasn't for Apple (who doesn't care about enterprise) butting in, the browser consortium would be reminiscent of the old Soviet Union in terms of voting.
>who's going to tell that 0.1% of a billion people that they don't matter?
This is also not a fair framing. There are lots of good reasons to deprecate a technology, and it doesn't mean the users don't matter. As always, technology requires tradeoffs (as does the "common good", usually.)
> Why wouldn't the overworked/underresourced Safari and Firefox people want an excuse to have less work to do?
Because otherwise everybody has to repeat the same work again and again, programming the how instead of focusing on the what, the declarative way.
Then data is not free, but caged by processing so it can't exist without it.
I just want data or information - not processing, not strings attached.
I don't see any need to run any extra code over any information - except to keep control and to attach other code, trackers, etc. But I'm not Google; I have no need to push anything. (A faster JS engine, instead of empowering users, somehow made a browser better? No matter how fast, it can't - not for what I needed. Or was it instead of something they 'forgot' and wish they could erase?)
0.02% of public Web pages, apparently, have the XSLT processing instruction in them, and a few more invoke XSLT through JavaScript (no-one really knows how many right now).
It’s likely more heavily used inside corporate and governmental firewalls, but that’s much harder to measure.
By your argument, once anything makes it in, then it can't be removed. Billions of people are going to use the web every day and it won't stop. Even the most obscure feature will end up being used by 0.1% of users. Can you name a feature that's supported by all browsers that's not being used by anyone?
Yes. That is exactly how web standards work historically. If something will break 0.1% of the web it isn't done unless there are really really strong reasons to do it anyway. I personally watched lots of things get bounced due to their impact on a very small % of all websites.
This is part of why web standards processes need to be very conservative about what's added to the web, and part of why a small vocal contingent of web people are angry that Google keeps adding all sorts of weird stuff to the platform. Useful weird stuff, but regardless.
3. It seems there are plenty of examples of features being removed above that threshold NPAPI/SPDY/WebSQL/etc.
4. Resources are finite. It’s not a simple matter of who would be impacted. It’s also opportunity cost and people who could be helped as resources are applied to other efforts.
--- start quote ---
As a general rule of thumb, 0.1% of PageVisits (1 in 1000) is large, while 0.001% is considered small but non-trivial. Anything below about 0.00001% (1 in 10 million) is generally considered trivial. There are around 771 billion web pages viewed in Chrome every month (not counting other Chromium-based browsers). So seriously breaking even 0.0001% still results in someone being frustrated every 3 seconds, and so not to be taken lightly!
--- end quote ---
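(Sanity-checking the quoted arithmetic: 0.0001% of 771 billion monthly views is about 771,000 affected views; a 30-day month has about 2.6 million seconds, so that's roughly one frustrated view every 3.4 seconds.)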
Read the full doc. They even give examples when they couldn't remove a feature impacting just 0.0000008% of web views.
Also, according to Chrome's telemetry, very, very few websites are using it in practice. It's not like the proposal is threatening to make some significant portion of the web inaccessible. At least we can see the data underlying the proposal here.
I'm curious as to the scope of the problem, if html spec drops xslt, what the solutions would be; I've never really used xslt (once maybe, 20 years ago). In addition to just pre-rendering your webpage server-side, I assume another possible solution is some javascript library that does the transformations, if it needed to be client-side?
Looking at the problem differently: say some change would make Hacker News unusable. The data would support this and show that it practically affects no one.
First, we are an insignificant portion of the web, and it's okay to admit that.
Second, if HN were built upon outdated Web standards practically nobody else uses, I'm sure YCombinator could address the issue before the deadline (which would probably be at least a year or two out) to meet the needs of its community. Every plant needs nourishment to survive.
First, you're assuming that those portions of the Web won't evolve in order to survive. Second, you're ascribing a motive to Google that you assume (probably falsely) that they possess.
--- start quote ---
As a general rule of thumb, 0.1% of PageVisits (1 in 1000) is large, while 0.001% is considered small but non-trivial. Anything below about 0.00001% (1 in 10 million) is generally considered trivial.
There are around 771 billion web pages viewed in Chrome every month (not counting other Chromium-based browsers). So seriously breaking even 0.0001% still results in someone being frustrated every 3 seconds, and so not to be taken lightly!
--- end quote ---
3. Any feature removal on the web has to be a) given thorough thought and investigation, which we haven't seen. The Library of Congress apparently uses XSLT, and Chrome devs couldn't care less.
>Chrome telemetry underreports a lot of use cases
Sure; in that case, I would suggest to the people with those use cases that they should stop switching off telemetry. Everyone on HN seems to forget telemetry isn't there for shits and giggles, it's there to help improve a product. If you refuse to help improve the product, don't expect a company to improve the product for you, for free.
This was mentioned in the discussions and is an easy search away. Which means that the Googlers in their arrogance didn't do any research at all, and that their counter underrepresents data, as explicitly stated in their own document.
And yet it's not the Googlers and other browser implementers who didn't do even a modicum or research who are arrogant, but me, because I made a potential mistake quickly searching for something on my phone at night?
As for any standard, the implementers ultimately own it. Users don't spend resources on implementing standards, so they only get a marginal say. Do you expect to contribute to the 6G standards, or USB-C, too?
Notice how "unilateral support by browser vendors" didn't even look at actual usage of XSLT, where it's used, and whether significant parts would be affected.
So if I'm reading the two threads correctly, Google asked for feedback, essentially all the feedback said "no, please don't", and they said "thanks for the feedback, we're gonna do it anyway!"?
The other suggestions ignored seemed to be "if this is about security, then fund the OSS project, or swap to a newer, safer library, or pull it into the JS sandbox and ensure support is maintained." Which were all mostly ignored.
And "if this is about adoption, then listen to the constant community request to update to the newer XSLT 3.0, which has been out for years and would have much higher adoption due to tons of QoL improvements, including handling JSON."
And the argument presented, which I can't verify (but seems reasonable to me), is that XSLT supports the open web. Google tried to kill it a decade ago; the community pushed back and stopped it. So Google's plan was to refuse to do anything to support it, ignore community requests for simple improvements, try to make it wither, then use that as justification for killing it at a later point.
Forcing this through when almost all feedback is against it seems to support that to me. Especially with XSLT suddenly/recently gaining a lot of popularity, it seems like they are trying to kill it before they have an open competitor on the web.
>essentially all the feedback said "no, please don't". And they said "thanks for the feedback, we're gonna do it any way!"?
This is a perfectly reasonable course of action if the feedback is "please don't" but the people saying "please don't" aren't people who are actually using it or who can explain why it's necessary. It's a request for feedback, not just a poll.
> I'd presume that most of those people are using it in some capacity, it's just that their numbers are seen as too minor to influence the decision.
I think the idea of that is reasonable. If I used XSLT on my tiny, low-traffic blog, I think it's reasonable for browser devs to tell me to update my code. Even if 100 people like me said the same thing, that's still a vanishingly small portion of the web, a rounding error, protesting it.
I'd expect the protests to be disproportionate in number and loudness because the billion webmasters who couldn't care less aren't weighing in on it.
Now, I'm not saying this with a strong opinion on this specific proposal. It doesn't affect me either way. It's more about the general principle that a loud number of small webmasters opposing the move doesn't mean it's not a good idea. Like, people loudly argued about removing <marquee> back in the day, but that happened to be a great idea.
True, a small number of vocal opponents does not automatically make something a bad idea. But in these cases of compatibility, especially with something as big as the Web, the vast majority of those affected who do care will be completely silent. There's no hotline to call up the entire world and tell them to update their code.
(And if you did want to tell the entire world to update their code, and have any chance of them following through with it, you'd better make sure there's an immediate replacement ready. Log4Shell would probably still be a huge issue today if it couldn't be fixed in place by swapping out jar files.)
> If I used XSLT on my tiny, low-traffic blog, I think it's reasonable for browser devs to tell me to update my code.
I _do_ use XSLT on my tiny, low-traffic blog, and I _don't_ think that it's reasonable for browser devs to tell me to update my code.
Also, it's real easy to manufacture a situation where adoption of a thing is low when the implementation is incomplete and hasn't had significant updates for decades.
The web has grown a thousand fold over those decades, in spite of no support for XSLT. No browser has failed (or gained market traction) by missing support for (or adding more support for) XSLT. It's an irrelevancy, even if you did like it once.
Lots of content was lost when Flash was removed as well - much, much more than the amount of content that will be lost if XSLT is removed. And yet the web continued.
The web is straight up a weaker, worse, more closed-off experience post-Flash, so I'm not sure this engenders the kind of response you had envisioned; now I'm worried about XSLT.
They didn't even do the tiniest bit of research, as people in the discussions clearly showed, and there are high-impact sites that would be affected by this, including Congress and the Library of Congress: https://news.ycombinator.com/item?id=44958929
It would be incredible if we could pull it into the JavaScript/WASM sandbox and get XSLT 3.0 support. The best of both worlds, at the cost of a performance hit on those pages, but not a terrible cost.
It comes with the XML territory that things have versioned schemas and things like namespaces, and can be programmed in XSLT. This typically means that integrations are trivial due to public, reliable contracts.
Unlike your average Angular project. Building on top of minified TypeScript is rather unreasonable, and integrating with JSON means you have a less-than-reliable data transfer protocol without a schema, so validation is a crude trial-and-error process.
There's no elegance in raw XML et consortes, but the maturity of this family means there are also very mature tools, so in practice you don't have to look at XML or XSD as text: you can just unmarshal it into your programming language of choice (that is, if you choose a suitable one) and look at it as you would any other data structure.
The thread was already locked due to vitriol, insults, and general ranting before I had the chance to comment and say I felt it was a good idea. Also, "this is a good idea" is not really the sort of thing people tend to comment, so it will always be biased towards people who disagree. What "all feedback" said on that thread is basically meaningless – it's not a vote.
The fastest way to be dismissed is to be a dick. People were massive dicks, so they were dismissed. You all made your bed, so now you have to lie in it.
> > ChromeOS is for all practical purposes, the web
I'm very practically using Debian Linux on ChromeOS to develop, test, and debug enterprise software. I even compile and run some native code. It is very much more than just the web.
Breaking the fundamental promise of the HTML spec is a big deal.
The discussions don't address that. That surprises me, because these seem to be the people in charge of the spec.
The promise is, "This is HTML. Count on it."
Now it would be just, "This is HTML for now. Don't count on it staying that way, though."
Not saying it should never be done, but it's a big deal.
They are removing XSLT just for being a long-tail technology. The same argument would apply to other long-tail web technologies.
So what they're really proposing is to cut off the web's long tail.
(Just want to note: The list of long-tail web technologies will continue to grow over time... we can expect it to grow roughly in proportion to the rate at which web technologies were added around 20 years in the past. Meaning we can expect an explosion of long-tail web technologies soon enough. We might want to think carefully about whether the people currently running the web value the web's long tail the way we would like.)
Nothing lasts forever, and eventually you have to port, emulate, archive or otherwise deal with very old applications / media. You see this all over the place: physical media, file formats, protocols, retro gaming, etc.
There's a sweet spot between giving people enough time and tools to make a transition while also avoiding having your platform implode into a black hole of accumulated complexity. Neither end of the spectrum is healthy.
WHATWG broke this quasi-officially when they declared HTML a "Living Standard". The HTML spec is not a standard to be implemented anymore, it's just a method of coordinating/announcing what the browser vendors are currently working on.
(For the same reason, they dropped the name HTML5 and are only talking about "HTML". Who needs version numbers if there is no future and no past anyway?)
To be completely fair, looking over the lines removed by the PR, there don't appear to be any normative statements requiring HTML to handle XSLT, unless I missed one.
I get that people are more reacting to the prospect of browsers removing existing support, but I was pretty surprised by how short the PR was. I assumed it was more intertwined.
Their explicit intent is to generally remove XSLT from browsers.
If this was just about, e.g., organizing web standards docs for better separation of concerns, I think a lot of people would be reacting to it quite differently.
> They are removing XSLT just for being a long-tail technology. The same argument would apply to other long-tail web technologies.
That's a concise way to put it. IMHO this is also the main problem of the standard.
However, I think XSLT isn't only long tail but also a curiosity with merely academic value. I did some experimentation and prototyping with XSLT back when it was still considered alive. So even if you see some value in it, the problems are endless:
* XSLT is cumbersome to write and read
* XML is clunky, XSLT even more so
* yes, there's SLAX, which is okay-ish, but it becomes clear very fast that it's indeed just syntax sugar
* there's XSLT 2.0 but there's no software support
* nobody uses it, there's no network effect in usage
I think a few years ago I stumbled upon a CMS that used it, and once I accidentally came across a website that used an XSLT transformation for styling. That's all the XSLT I ever saw actually being used in the wild.
All in all, XSLT is a useless part of the way-too-large long tail that prevents virtually anyone from writing a spec-compliant web browser engine.
> The promise is, "This is HTML. Count on it."
I think after HTML4 and XHTML people saw that a fully rigid standard isn't viable, so they made HTML5 a living standard with a plethora of working groups. So the times when that promise was ever supposed to hold are long over anyway.
So indeed, the correct way forward would be to remove more parts of a long tail that's hardly in use and holds back innovation. And instead maybe keep a short list of features that allow writing modern websites.
(Also nobody is stopping anyone from using XSLT as primary language that compiles to HTML5/ES5/CSS)
There's a perverse irony that Google is as responsible as anybody for cramming a crazy amount of new stuff into the HTML/CSS/browser spec that everybody else has to support forever.
If they were one of the voices for "the browser should be lightweight and let JS libs handle the weird stuff" I would respect this action, but Google is very very not that.
Probably labeling a removal of a format (which is somewhat niche anyway) as "killing the open web" was a bit hyperbolic and not entirely warranted in this case.
Imagine that tomorrow, Google announces plans to stop supporting HTML and move everyone to its own version of "CompuServe", delivered only via Google Fiber and accessible only with Google Chrome. What headline would you suggest for that occasion? "Google is killing the open web" has already been used today on an article about upcoming deprecation of XSLT format.
Hard to find these days, but it reminds me of this [0]:
> "- Google had a plan called "Project NERA" to turn the web into a walled garden they called "Not Owned But Operated". A core component of this was the forced logins to the chrome browser you've probably experienced (surprise!)"
To "not own but operate" seems to go into the direction of the parent comment.
Right. If anything, it's the opposite: removing XSLT reduces the complexity of existing browsers, allowing new ones to catch up faster.

To me it seems that some people just really like using XSLT, and don't want it gone. Which is fair, but it also has nothing to do with the web's openness - yes, Google has far too much power, but XSLT isn't helping.
> Probably labeling a removal of a format (which is somewhat niche anyway) as "killing the open web" was a bit hyperbolic and not entirely warranted in this case.
Incorrect on three counts. First, that article lists a bunch of useful technologies that were rejected at WHATWG with unconvincing reasons, against massive public protest; it wasn't just labeling the removal of one format, so that's a misrepresentation. The second is your characterization of XSLT as niche. The article makes a case for why it is that way and why it shouldn't be so: it's niche because it was neglected by the browser devs themselves. It hasn't been updated to the latest standard in a long time, and it isn't maintained well enough to avoid serious bugs. And finally the third - "killing the open web" being hyperbole. I don't even know where to start.

There was a joke that web standards are proposed by someone from Google, reviewed and cleared by someone else from Google, and finally approved by yet another someone from Google. We saw this in action with WEI (the only reason for its partial rollback being the unusual attention and the massive backlash it faced from the wider tech community and mainstream media - including ours). At this point the public discussion there is just a farce; I don't know how many times this keeps repeating. That article shows many examples of it. Let me add my own recollections of the mockery to the mix - the inclusion of EME and the rejection of JPEG-XL (technically not a part of the standard, but it is in a manner of speaking). It doesn't even resemble anything open.
I will be surprised if this comment doesn't receive a ton of negative votes. But there is no point in being a professional, or in being here, if I'm unwilling to oppose this in the public interest. The general conduct of the WHATWG is antithetical to the public interest and is meant to escape the attention of the non-tech public. Even the voice of the savvy public is ignored repeatedly and contemptuously. It's not difficult to identify the corruptive influence of private commercial interests on these standards - EME and WEI being the tip of the iceberg. And let's not ignore the elephant in the room: it's getting harder by the day to use a browser (web engine, to be more precise) of your choice. In this context, the removal of XSLT isn't just a unilateral decision (please don't quote Firefox, Safari or Edge; their interdependence is nothing short of a cabal at this point); its justification is based on problems that they themselves created.
Again expecting to be downvoted, but it's hard to miss the patterns - arguments against XSLT that ignore the neglect that led to it, and the dismissal of public comments (then why discuss it where anyone can read and post? why bill it as open?). The same happened with SMIL, JPEG XL, ... It's touchy to suggest attempts to drown out the opposition (I know it has a name, but even naming it is enough to trigger some), even if there are sufficient reasons to suspect it. But the flagging of that other article is a blatant indicator of it. Nothing in that article is factually false or remotely hyperbolic. Many of us are first-hand witnesses to the damages and concerns it raises. The article is a good-quality aggregation of the relevant history. Who is so inconvenienced by that? The only reason I can think of is the zeal to censor public-interest opinions. Is the hubris of the group's issue tracker spreading to public tech fora now? Conduct like this makes me lose hope that the web platform will ever be the harbinger of humanity's progress that it once promised to be. Instead it's turning out to be another slow-motion casualty of unbridled greed.
PS: The flag has been cleared by the admins. But the intent of the (non-admin) flaggers is unmistakable.
These things land differently with different readers, of course, but "Google is killing the open web" does seem pretty baity to me. The combo of grand-claim and something-to-get-mad-about usually is. It doesn't take too large a set of provoked readers to get a large enough set of provoked commenters to bump a thread into flamewar mode.
Please take this as a point of discussion rather than as an argument. That title is something that I and many others would have come up with on our own without needing any provocation. In fact, the exact same thing has been said numerous times independently all over the net. There are so many instances that justify the assertion that you could make a very long list with the relevant HN stories alone. But that isn't the point of this reply.
The way I see it, any general or sweeping accusation against an entity may be construed as clickbait or too provocative for HN, even if the content backs it up sufficiently. But at what point do you draw the line and consider the accusations credible enough to warrant such scathing criticism? It's not as if these entities are renowned for their ethical conduct or even basic decency regarding the commons. Heated public lash-back is often the only avenue they leave us. Case in point: I hope you remember the stand the HN crowd took against WEI. Make no mistake, such discussions here don't go unnoticed. The talking points here often influence the public discourse, including by mass media. That's why there is such a fierce fight to control the narrative here.
I respect your right to your opinion. But this is essentially a political subject, and there is no getting around the fact that you cannot divorce politics from technology, or from any relevant subject for that matter. If that's considered flame war, then I guess flame wars are an unavoidable and normal part of technical discourse. It isn't personal (and no personal attacks should be involved), but the stakes are high enough for the contestants (often of a highly monetary nature). Attempts to curb such heated discourse will have two serious consequences. The first is that you will give one side, or often (ironically) both, the impression that HN is a place to amplify certain narratives without a balanced take. Secondly, you'll unintentionally and indirectly influence the outcome outside of HN. From my perspective, that leaves you in the unenviable predicament of making such serious decisions.
So I implore you to consider these matters as well when making such decisions. Especially to ensure that your personal biases don't influence what you consider click- and flame-bait. From my personal experience, I know that you put the utmost care, diligence and sincerity into these matters. But it's possible that the pressure to avoid controversies, fights and bad blood has shifted your Overton window too far into cautious territory over time. Probably a good yardstick is to ask whether the flamewar is important enough and whether it avoids personal harm (physical and emotional). I hope you'll consider this opinion when you make similar determinations in the future. Regards!
When I go through these points I don't think we're disagreeing much! It seems more of a difference in style. For examples:
HN doesn't lack for criticism of the tech BigCos. If it's true that HN influences the public discourse (which I doubt, but let's assume it does), all that influence was gained by being the same HN with the same bookish* titles and preference to avoid flamewars as we're talking about here.
Yes, many people have the impression that HN is biased, pushing one point of view over another, etc. But people will have that impression regardless. It's in the eye of the beholder, and there are many angry beholders, so we get accused of every bias you can think of. This is baked into the fundamentals of the site.
I don't think moderators' personal tastes are all that intertwined with issues like baity titles. For example, I like Lisp but if someone posted "Lisp crushes its enemies into execrable dust", I'd still edit that title to "Lisp macros provide a high degree of expressiveness" or some representative sentence from the article.
* pg's word about how he wanted the HN frontpage to be
> Please don't post insinuations about astroturfing, shilling, brigading, foreign agents, and the like. It degrades discussion and is usually mistaken.
Honestly, the guidelines must also include a clause prohibiting those activities. Sometimes the pattern is overwhelming. But it's prohibited to complain about it. Not an ideal situation. Hope you'll give it a serious thought.
Those activities are certainly prohibited. I don't think we need a guideline to say that, though.
The HN guidelines don't list everything that's prohibited. To publish such a list would be to imply that everything not on the list is ok. That would be a big mistake! It would be carte blanche to the entire internet to find loopholes and wreak havoc with them.
> Sometimes the pattern is overwhelming.
The trouble is that in many cases it feels like such a pattern—and the feeling can be super convincing—yet there turns out to be no evidence for it. Perceptions are awfully unreliable about this.
We ask people not to post about these things in the threads, not to imply that actual astroturfing etc. is at all ok, but because unfounded comments about it vastly outnumber well-founded comments. Worse, they have a way of proliferating and taking over the threads.
Keep in mind that that guideline doesn't say "please don't post and then do nothing". It says "please don't post, but do email us so we can look into it". We do look into it, and on occasions when we find evidence, we act on it. There just needs to be something objective to go on, and in most cases there isn't.
The phenomenon of internet users being far too quick to jump to conclusions about astroturfing, bots, etc., is extremely well established. If there's one phenomenon we've learned about decisively over the years, that's the one. (Well, one of two.)
Companies use the same tactics as some states, bot campaigns, etc. The aim is to suppress, or at least diminish, the voices of opposition.
The flagged post is a perfect example. It contains just a fraction of factual information, but it was enough for bot farms to engage. Manipulators get mad at truth.
This is actually not a bad idea. Why should the browser contain a specific template engine, like XSLT, and not Jinja for example? Also it can be reimplemented using JS or WASM.
The browsers today are too bloated, and it is difficult to create a new browser engine. I wish there were simpler standards for a "minimal browser": for example, supporting only basic HTML tags, basic layout rules, WASM, and Java bytecode.
Many things, like WebAudio or Canvas, could be implemented using WASM modules, which, as a side effect, would prevent their use for fingerprinting.
Jinja operates on text, so it's basically document.write(). XSLT works on the nodes themselves. That's better.
> Also it can be reimplemented using JS or WASM.
Sort of. JS is much slower than the native XSLT transform, and the XSLT result is cacheable. That's huge.
I think if you view XSLT as nothing more than ancient technology that nobody uses, then I can see how you could think this is ok, but I've been looking at it as a secret weapon: I've been using it for the last twenty years because it's faster than everything else.
I bet Google will try and solve this problem they're creating by pushing AMP again...
> The browsers today are too bloated
No, Google's browser today is too bloated: That's nobody's fault but Google.
> and it is difficult to create a new browser engine
I don't recommend confusing difficult to create with difficult to sell unless you're looking for a reason to not do something: There's usually very little overlap between the two in the solution.
I'm asking this genuinely, not as a leading question or a gotcha trap: why use this client side, instead of running it on the server and sending the rendered output?
For one, in many cases the XML + XSLT is more compact than the rendered output, so there are hosting and bandwidth benefits, especially if you're transforming a lot of XML files with the same XSLT.
Imagine 1000 numbers in XML and an XSLT with xsl:for-each that renders a div with a label, a textbox containing the number, and maybe a button.
That's a simple example, and the output would be a lot longer than the XML+XSLT.
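A rough sketch of what that stylesheet could look like, assuming a trivial <numbers><number>…</number></numbers> document (file and element names are hypothetical, not from the thread):

    <?xml version="1.0"?>
    <!-- numbers.xsl (hypothetical): render each <number> as a label, a textbox and a button -->
    <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
      <xsl:output method="html"/>
      <xsl:template match="/numbers">
        <html>
          <body>
            <xsl:for-each select="number">
              <div>
                <label>Item <xsl:value-of select="position()"/></label>
                <!-- {.} is an attribute value template: copies the number's text into the value attribute -->
                <input type="text" value="{.}"/>
                <button>Save</button>
              </div>
            </xsl:for-each>
          </body>
        </html>
      </xsl:template>
    </xsl:stylesheet>

The 1000 <number> elements stay a few bytes each on the wire; the repetitive markup only materializes client-side.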
I think the obvious answer is that client-side mapping would let the browser give different views of the data to the client. The obvious problem is that downloading all the data and then transforming it is inherently inefficient (and sure, despite this, download-then-process is a common solution to many problems - but it's problematic to specify the worst solution before you know the problem).
Perhaps there's an alternative universe where javascript lost and an elegant, declarative XSLT could declaratively present data and incrementally download only what's needed, allowing compact and elegant websites.
But in our universe today, this mapping language wound up a half-thought-out idea that just kicked around in the specs for a long time without ever making sense.
My gut instinct is to agree with every bit of that. I admit that I might be missing something, but I've never wanted to send the data once and then have the client view it in multiple transformed ways (minus simple presentation stuff like sorting a table by column and things like that).
And using it to generate RSS as mentioned elsewhere in the comments? That makes perfect sense to me on the server. I don't know that I've ever even seen client-side generated RSS.
But again, this may all be my own lack of imagination.
Maybe, but the PR author, who also created the issue there, gave an example: "JSON+React". React is one of the slowest frameworks out there. Performance is rarely considered in contemporary front-end.
Loading one page is probably faster than loading a template and only after that loading the data with a second request, given that the network latency can be pretty high. That's why Google serves (served?) its main page as a single file and not as multiple HTML/CSS/JS files.
> Loading one page is probably faster than loading a template and only after that loading the data with a second request, given that the network latency can be pretty high
XSLT is XML: It can be served with the XML as a single request.
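If that refers to XSLT 1.0's embedded stylesheets (one reading of "single request"), the data and the transform travel in one document, roughly like the sketch below. Caveats: the spec requires the id attribute to be declared as an ID type (omitted here for brevity), and browser support for the embedded form has historically been far spottier than for an external, separately cached stylesheet.

    <?xml version="1.0"?>
    <?xml-stylesheet type="text/xsl" href="#transform"?>
    <page>
      <xsl:stylesheet id="transform" version="1.0"
          xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
        <!-- empty template: keeps the embedded stylesheet from being rendered as content -->
        <xsl:template match="xsl:stylesheet"/>
        <xsl:template match="page">
          <html><body><h1><xsl:value-of select="title"/></h1></body></html>
        </xsl:template>
      </xsl:stylesheet>
      <title>Hello</title>
    </page>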
You don't have any idea what you're talking about.
> That's why Google serves (served?) its main page as a single file and not as multiple HTML/CSS/JS files.
Google.com used to be about a kilobyte. Now it's 100kb. I think it's absolutely clear Google either doesn't have the first idea how to make things fast, or doesn't care.
That assumes the server has a lot of additional CPU power to serve the content as HTML (and thus do the templating server-side), whereas with XSLT I can serve the XML and the XSLT, and the client side can render the page according to the XSLT.
The XSLT can also be served once, and then cached for a very long time period, and the XML can be very small.
With server-side rendering you control the amount of compute you are providing; with client-side rendering you can't control anything, and if the app is dog slow on some devices, there's nothing you can do.
> Sort of. JS is much slower than the native XSLT transform, and the XSLT result is cacheable. That's huge.
Nobody is going to process millions of DOM nodes with XSLT, because the browser won't be able to display them anyway. And one can write a WASM implementation.
You're right that nobody processes a million DOM nodes with XSLT in a browser, but you're wrong about everything else: WASM has a huge startup cost.
Consider applying stylesheet properties: XSLT knows exactly how to lay things out, so it can put all of the stylesheet properties directly on the element. Pre-rendered HTML would be huge, and CSS is slow. XSLT gets you direct attachment, small payloads, and low-latency display.
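A hedged illustration of that direct-attach idea (element names hypothetical): the template emits its presentation inline on each element it produces, so no separate selector-matching pass is needed for those properties.

    <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
      <!-- the transform decides the layout and attaches it directly to the output -->
      <xsl:template match="row">
        <div style="display:flex;gap:8px">
          <span style="width:12em;font-weight:bold"><xsl:value-of select="@name"/></span>
          <span><xsl:value-of select="."/></span>
        </div>
      </xsl:template>
    </xsl:stylesheet>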
That's an even rarer case, embedding CSS rules into an XSLT template (if I understood you correctly); I've never heard of it. I know that CSS is sometimes embedded into HTML, though.
> Why should the browser contain a specific template engine, like XSLT,
XSLT is a templating language (like HTML is a content language), not a template engine - the engine is the software that runs it, the way Blink or WebKit is a browser engine.
> Also it can be reimplemented using JS or WASM.
Changing the implementation wouldn't involve taking the language out of the web platform. There wouldn't need to be any standardization talk about changing the implementation used in one or more browsers.
The old, bug-ridden native XSLT code could also be shipped as WASM along with the browser rather than being deprecated. The sandbox would nullify the exploits, and avoid breaking old sites.
They actually thought about it, and decided not to do it :-/
> Many things, like WebAudio or Canvas, could be implemented using WASM modules, which, as a side effect, would prevent their use for fingerprinting.
Audio and canvas are fundamental I/O things. You can’t shift them to WASM.
You could theoretically shift a fair bit of WebAudio into a WASM blob: just expose something more like Mozilla’s original Audio Data API, which the Web Audio API defeated for some reason, and implement the rest atop that single primitive.
2D canvas context includes some rendering stuff that needs to match DOM rendering. So you can’t even just expose pixel data and implement the rest of the 2D context in a WASM blob atop that.
And shifting as much of 2D context to WASM as you could would destroy its performance. As for WebGL and WebGPU contexts, their whole thing is GPU integration, you can’t do that via WASM.
So overall, these things you’re saying could be done in WASM are the primitives, so they definitely can’t.
The browser could use a Java or .NET bytecode interpreter - in that case it doesn't need to include a compiler and you can use any language - but then you won't be able to see a script's source code.
You already effectively can't see a script's source code, because we compile, minify, and obfuscate JS these days, since the performance characteristics are otherwise so poor.
Actually, most of the time C# decompiles nicer from CLR bytecode than esoterically built JS.
It's a consequence of JavaScript being "good enough." Originally, the goal was for the web to support multiple languages (I think one prototype of the <script> tag had a type="text/tcl"), and IE supported VBScript for a while.
But at the end of the day, you only really need one; the type attribute was phased out of the script tag entirely, and JavaScript won.
> Why should the browser contain a specific template engine, like XSLT, and not Jinja for example?
Historic reasons, and it sounds like they want it to contain zero template engines. You could transpile a subset of Jinja or Mustache to XSLT, but no one seems to do that or to care.
Adding XSLT support is as absurd as adding React to a browser (especially given that its change detection is inefficient and requires a lot of computation). Instead, browsers should provide better change-tracking methods for JS objects.
The downside of Knockout was that it used proxies for change tracking, and you had to create those proxies manually: you couldn't have an object with a Number property, you had to have an object with a proxy function as a property.
I kind of agree that it's fair for little-used,[0] non-web-like features to be considered for removal. However, I wish they didn't hide behind security vulnerabilities as the reason, as that clearly wasn't it. The author didn't even bother to look into whether a memory-safe package existed. "We're removing this for your own good" is the worst way to go about it, but he still doubles down on this idea later in the thread.
I get what you're saying, but following this line of reasoning would mean that successful, wide-spread specifications, standards, and technologies must never drop any features. They would only ever accumulate new features, bloating to the point of uselessness, and die under the weight of their own success.
Nonsense. What follows from this line of reasoning is that putting percentages on billions is intellectually dishonest: you don't have to go any further than that. It is perhaps out of ignorance (now you know), but if you try to make it about anything else, that's just arguing in bad faith.
Of course you can drop features, but if you work at Google I think you can pick something else, and you'll have a hard time convincing anyone that XSLT - which was in Chrome back when Chrome was fast - is why Chrome isn't fast anymore. And if you don't work at Google, why do you care? You've learned something new today. Enjoy.
It's not dishonest. Software needs to be maintained, and Google's isn't the only web browser, nor should it be. It makes sense to re-evaluate which features make sense for the web. Flash and Java applets were both removed from web browsers and broke sites for millions of users, probably far more than XSLT would. But it was still the right call. This case is a bit more nuanced than those, but I still think it's at least fair to discuss removing it.
Compare WebKit to the UDK (the Unreal Development Kit, for game dev) to see why there is so much bloat in the browser. People have wanted to render more and more advanced things, and the WebKit engine has to cater to all of them as best it can.
For better or worse, http is no longer just for serving textual documents.
While this sounds crazy at first, I could warm to several incremental layers of features, where browsers could choose to implement support for only a subset of layers. The lowest layer would be something like HTTP with plain text, the next one HTML, then CSS with basic selectors, then CSS with the full selector set, then ECMA and WASM, then device APIs, and so forth.
Would make it possible to create spec-compliant browsers with a subset of the web platform, fulfilling different use cases without ripping out essentials or hacking them in.
There is no point in several layers because to maximize compatibility developers would need to target the simplest layer. And if they don't, simple browsers won't be able to compete with full-fledged ones.
You can set the doctype in the document to the spec you want to use, which is basically what you're asking for. Try setting <!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
> Why should the browser contain a specific template engine, like XSLT, and not Jinja for example? Also it can be reimplemented using JS or WASM.
I think a dedicated unsupported media type -> supported media type WASM transformation interface would be good. You could use it for new image formats and the like as well. There are things like JXL.js that do this:
I get the point of a minimal browser and WASM, but Java bytecode?! Why not Python bytecode? It seems unreasonable to me to add support for any specific bytecode. By layout rules, do you mean getting rid of CSS? That also sounds unreasonable, IMHO.
And no, WebAudio and Canvas couldn't be implemented in client WASM without big security implications. If by module you mean inside the browser, then what is the point of WASM here?
What WebAudio needs to provide is only a means to get or push buffers from/to audio devices and to run code in a high-priority thread. There is no need for the browser to provide implementations of low-pass filters, audio processing graphs, and similar primitives.
Oh hey, that thing happened that one could easily see was going to happen [0]. The writing was on the wall for XSL as soon as the browsers tore out FTP support: their desire to minimize attack surface trumps any tendency to leave well enough alone.
I wonder what the next step of removing less-popular features will be. Probably the SMIL attributes in favor of CSS for SVG animations, they've been grumbling about those for a while. Or maybe they'll ultimately decide that they don't like native MathML support after all. Really, any functionality that doesn't fit in the mold of "a CSS attribute" or "a JS method" is at risk, including most things XML-related.
CSS animations still lack a semantic way to sequence animations based on the beginning/end of some other animation, which SMIL offers. With SMIL you can say 'when this animation ID begins/ends only then trigger this other animation', including time offsets from that point.
Which is miles better than having to use calcs for CSS animation timing, which requires a kludge of CSS variables etc. to keep track of when something begins/ends time-wise, if you want to avoid requiring JavaScript. And some years ago, Firefox IIRC didn't even support time-based calcs.
When Chromium announced an intent to deprecate SMIL a decade back (before relenting), it was far too early to consider, given that CSS at the time lacked much of what SMIL allowed (including motion along a path and SVG attribute-value animations, which gained CSS support later). It also set off a chain of articles and never-again-updated notes warning about SMIL, which just added to the confusion. I even remember an LLM mistakenly believing SMIL was still deprecated in Chromium.
And there's one of the issues: browser devs are perfectly happy if user JS can be used to replicate some piece of functionality, since then it's not their problem.
There's ways to reduce attack surface short of tearing out support. Such as, for instance, taking one of those alleged JS polyfills and plugging it into the browser, in place of all the C++. But if attack surface is your sole concern, then one of those options sounds much easier than the other, and also ever-so-slightly superior.
In any case, there's no limit on how far one can disregard compatibility in the name of security. Just look at the situation on Apple OSes, where developers are kept on a constant treadmill to update their programs to the latest APIs. I'd rather not have everything trend in that direction, even if it means keeping shims and polyfills that aren't totally necessary for modern users.
It is a balance (compatibility vs attack surface). The issue with XSLT (which I am still a strong advocate for) is that nobody is maintaining that code, so vulnerabilities sit there undetected. Like the relatively recent discovery of the xsl:document vulnerability.
> It is a balance (compatibility vs attack surface).
What I'm trying to say is that it's a false dichotomy in most cases: implementations could almost eliminate the attack surface while maintaining the same functionality, and without devoting any more ongoing effort. Such as, for instance, JS polyfills, or WASM blobs, which could be subjected to the usual security boundaries no matter how bug-ridden and ill-maintained they are internally.
But removing the functionality is often seen as the more expedient option, and so that's what gets picked.
Sure, but this requires someone sitting down and writing the JS polyfill, and then maintaining it indefinitely. And for something as complicated as XSLT, that will surely be indefinite maintenance, because complicated specs beget complicated implementations.
In the absence of anyone raring to do that, removal seems the more sensible option.
The vendor discussion on removing XSLT is predicated on someone creating a polyfill for users to move to. That is not an unreasonable assumption, because a polyfill can be created fairly trivially by compiling the existing XSLT processor to WASM.
And it is also fairly trivial to put that polyfill into the browsers.
The Chrome team has been moaning about XSLT for a decade. If security was really their concern they could have replaced the implementation with asm.js a decade ago, just as they did for pdfs.
> Sure, but this requires [...] maintaining it indefinitely.
Does it, though? Browsers already have existing XSLT stacks, which have somehow gotten by practically unmodified for the last 20 years. The basic XSLT 1.0 functionality never changes, and the links between the XSLT code and the rest of the codebase rarely change, so I find it hard to believe that slapping it into a sandbox would suddenly turn it into a persistent time sink.
Wasn't this whole discussion sparked by a fairly significant bug in the libxslt implementation? There's also a comment from a Chrome developer somewhere in this thread talking about regularly trying to fix things in libxslt, and how difficult that was because of how the library is structured.
So it is currently a persistent time sink, and rewriting it so that it can sit inside the browser sandbox will probably add a significant amount of work in its own right. If that's work that nobody wants to do, then it's difficult to see what your solution actually is.
The current problem is that bugs in libxslt can have big security implications, so putting it or an equivalent XSLT 1.0 processor in a safe sandbox would make active maintenance far less urgent, since the worst-case scenario would just be presentation issues.
As for immediate work, some in this thread have proposed compiling libxslt to WASM and using that, which sounds perfectly viable to me, if inefficient. WASM toolchains have progressed far enough that very few changes are needed to a C/C++ codebase to get it to compile and run properly, so all that's left is to set up the entry points.
(And if there really were no one-for-one replacement short of a massive labor effort, then current XSLT users would be left with no simple alternative at all, which would make this decision all the worse.)
> their desire to minimize attack surface trumps any tendency to leave well enough alone.
Is that why Chrome unilaterally releases 1000+ web APIs a year, many of them quite complex and offering a huge range of things to go wrong (including access to USB, serial devices, etc.)? To reduce the attack surface?
Well, their desire to stay trendy trumps their desire to minimize attack surface, I'd have to imagine. Alas, XML is roughly the polar opposite of trendy, mostly seen as belonging in the trash heap of the 90s alongside SOAP, CORBA, DCOM, Java applets, etc.
How do we feel about this concern in general, not just specific to XSLT?
> my main concern is for the “long tail” of the web—there's lots of vital information only available on random university/personal websites last updated before 2005
It's a strong argument for me, because I run a lot of old webpages that continue to 'just work', and I regularly get value out of other people's old pages. HTML and JS have always been backwards compatible so far, or at least close enough that you get away with slapping a TLS certificate onto the webserver.
But I also see that we can't keep support for every old thing indefinitely. See Flash. People make emulators like Ruffle that work impressively well, to play a nostalgic game or use a website on the Internet Archive whose main menu (guilty as charged) was a Flash widget. Is that the way we should go with this - emulators? Or a dedicated browser that still gets security updates but is intended only for viewing old documents, the way we view slide film today? Or some other way?
It seems like they've already created a browser extension that'll act as a polyfill [0]. Chrome just doesn't want to ship and maintain it. Which is very similar to Ruffle.
This would be sad, but I think it's sadder that we didn't spend more effort integrating more modern XSLT. It was painful to use, _but_ if it had gotten a few revisions in the browser, I think it would have been a massive contender to things like React.
XML was unfairly demonized for the baggage that IBM and other enterprise orgs tied to it, but the standard itself was frigging amazing and powerful.
I have to agree. I liked XSLT and would have done much more with just a few additions to it.
Converting a simple, manually edited XML database of things to HTML was awesome. What I mostly wanted was the ability to pass in a selected item to display differently. That would have allowed all sorts of interactivity with static documents.
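For what it's worth, XSLT 1.0's top-level parameters get partway there when the host supplies them; a minimal sketch, with hypothetical names:

    <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
      <!-- supplied by the caller, e.g. via XSLTProcessor's setParameter(null, "selected", "42") -->
      <xsl:param name="selected"/>
      <xsl:template match="thing">
        <li>
          <xsl:if test="@id = $selected">
            <xsl:attribute name="class">selected</xsl:attribute>
          </xsl:if>
          <xsl:value-of select="title"/>
        </li>
      </xsl:template>
    </xsl:stylesheet>

The catch - and probably the gap felt here - is that the declarative xml-stylesheet route offers no way to pass such a parameter from the document itself; it takes script to supply it.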
> @whatwg whatwg locked as too heated and limited conversation to collaborators
Too heated? Looked pretty civil and reasonable to me. Would it be ridiculous to suggest that the tolerance for heat might depend on how commenters are aligned with respect to a particular vendor?
It's a little jarring that the 1 comment visible underneath that is a "Nice, thanks for working on this!", and if you click on the user that wrote it, it's someone working for Google on Chrome... sheesh, kiss-ass much?
FYI, I heard that it was Apple employees who administer that repo that marked those comments as off topic and locked the thread, but people are attributing that to the Google employee that opened the issue.
I disagree - I saw a number of comments I would consider rude and unprofessional and once a PR gets posted on HN, frankly it typically gets much worse.
I find people on HN are often very motivated reasoners when it comes to judging civility, but there’s basically no excuse for calling people “fuckers” or whatever.
> We didn't forgot your decade of fuckeries, Google.
> You wanted some heated comment? You are served.
> the JavaScript brainworm that has destroyed the minds of the new generation
> the covert war being waged by the WHATWG
> This is nothing short of technical sabotage, and it’s a disgrace.
> breaking yet another piece of the open web you don't find convenient for serving people ads and LLM slop.
> Are Google, Apple, Mozilla going to pay for the additional hosting costs incurred by those affected by the removal of client-side XSLT support?
> Hint: if you don't want to be called out on your lies, don't lie.
> Evil big data companies who built their business around obsoleting privacy. Companies who have built their business around destroying freedom and democracy.
> Will you side with privacy and freedom or will you side with dictatorship?
Bullshit like this has no place in an issue tracker. If people didn’t act like such children in a place designed for productive conversation, then maybe the repo owners wouldn’t be so trigger happy.
I love XSLT. I released a client-side XSLT-based PWA last year (https://github.com/ssg/eksi-yedek - in Turkish). The reason I had picked XSLT was that the input was in XML, and browser-based XSLT was the most suitable candidate for a PWA.
Two years ago, I made a book in memory of a late friend: a compilation of her posts on social media. Again, thanks to XSLT, it was a breeze.
XSLT has been orphaned on the browser-side for the last quarter century, but the story on the server-side isn't better either. I think that the only modern and comprehensive implementation comes with Saxon-JS which is bloated and has an unwieldy API for JavaScript.
Were XSLT dropped next year, what would be the course of action for those of us who rely on browser-based XSLT APIs?
XSLT, especially 3.0, is immensely powerful, and the lack of good solutions in the JS ecosystem would make the aftermath of this decision look even bleaker.
Fwiw, the XSLT implementation in Blink and WebKit is extremely inefficient. For example, it converts the entire document into a string, parses that into a format compatible with libxslt, then produces a string that is parsed back into a node structure again. I suspect a user-space library could be similarly effective.
It seems like the answer to the compat issue might be the MathML approach: an outside vendor would need to contribute an implementation to every browser. Possibly taking the very inefficient route, since that's easy to port.
I have no opinion on this, just sharing my one-and-only XSLT story.
My first job in software was as a software test development intern at a ~500 employee non-profit, in about 2008 when I was about 19 or 20 years old. Writing software to test software. One of my tasks during the 2 years I worked there was to write documentation for their XML test data format. The test data was written in XML documents, then run through a test runner for validation. I somehow found out about XSLT and it seemed like the perfect solution. So I wrote up XML schemas for the XML test data, in XSD of course. The documentation lived in the schema, alongside the type definitions. Then I wrote an XSLT document, to take in those XML schemas and output HTML pages, which is also basically XML.
So in effect what I wrote was an XML program, which took XML as input, and outputted XML, all entirely in the browser at document-view time.
And it actually worked and I felt super proud of it. I definitely remember it worked in our official browser (Internet Explorer 7, natch). I recall testing it in my preferred browser, Firefox (version 3, check out that new AwesomeBar, baby), and I think I got it working there, too, with some effort.
I always wonder what happened with that XML nightmare I created. I wonder if anyone ever actually used it or maybe even maintained it for some time. I guess it most likely just got thrown away wholesale during an inevitable rewrite. But I still think fondly back on that XSLT "program" even today.
I wrote my personal website in XML with XSLT transforming into something viewable in the browser circa 2008. I was definitely inspired by CSS Zen Garden where the same HTML gave drastically different presentation with different CSS, but I thought that was too restrictive with too much overly tricky CSS. I thought the code would be more maintainable by writing XSLT transforms for different themes of my personal website. That personal webpage was my version of the static site generator craze: I spent 80% of the time on the XSLT and 20% on the content of the website. Fond memories, even though I found XSLT to be incredibly difficult to write.
Ha! Shout out to CSS Zen Garden. I didn't go as far down the rabbit hole as you did (noped out before XSLT made its way into my mix), but around that time I made sure all of my html was valid XML (er, XHTML), complete with the little validation badge at the bottom of the page. 80:20 form to content ratio sounds about right.
Almost the same here: I wrote an XML CMS and then the XSLT to turn it into HTML... then realized I would have to keep writing XML, said hell no, and rewrote the whole thing with PHP and a MySQL DB.
I implemented the full XPath and XSLT language, with debugging capabilities, for a company I worked for some 25-ish years ago. It was fun (until XPath and XSLT 2 - well, that was fun too, but because of a nice work colleague, not the language), but I always did wonder how this took off and Lisp didn't.
I was quite fond of DokuWiki's XML-RPC. Probably long replaced now, but it was a godsend to have a simple RPC to the server from within JavaScript. (2007)
I once attempted to use XSLT to transform SOAP requests generated by our system so that the providers' implementations would accept them. This included having to sufficiently grok XSD, WSDL et al. to figure out which part of the chain was broken.
At the end of the (very long) process, I just hard-coded the reference request XML given by the particularly problematic endpoints, put some regex replacements behind it, and called it a day.
We can laugh at NFTs, but honestly there are a lot of technical solutions that fit the "kinda works / kinda seems like a good idea" mold but in the end are a house of cards propped up by vested interests.
Imagine people putting that much energy into writing a book that thick about XML. To be filed in the Theology section of a library.
It's not like the browsers can just switch to some better maintained XSLT library. There aren't any. There are about 1.5 closed-source XSLT 3 implementations, Altova and Saxonica. I don't want to sound ageist, but the latter is developed by the XSLT spec's main author, who is nearing retirement age. This library is developed behind closed doors, and from time to time zip files with code get uploaded to GitHub. Make of that what you will in terms of the XSLT community. For all of its elegance, XSLT doesn't seem very relevant if nobody is implementing it. I'm all for the open web, but XSLT should just be left in peace to slide into the good night.
Saxonica is an Employee Ownership Trust and the team as a whole is relatively young (far off from retirement).
"Saxonica today counts some of the world's largest companies among its customer base. Several of the world's biggest banks have enterprise licenses; publishers around the world use Saxon as a core part of their XML workflow; and many of the biggest names in the software industry package Saxon-EE as a component of the applications they distribute or the services they deploy on the cloud."
Best comment from another related thread (not from me):
So the libxml/libxslt unpaid volunteer maintainer wants to stop doing 'disclosure embargo' of reported security issues: https://gitlab.gnome.org/GNOME/libxml2/-/issues/913
Shortly after that, Google Chrome want to remove XSLT support.
PS2: Reminds me of this: https://xkcd.com/2347/ A shame that libxml and libxslt could not get more support while being used everywhere. Thanks to the unpaid volunteers for all the hard work!
This seems totally fine, though? The XSLT 1.0 maintainer says support is costing him heavily, then Chrome says removing support is fine, which seems to suit both of them.
It'd be much better if Google supported the maintainer, but given the apparent lack of use of XSLT 1.0 and the maintainer already having burned out, dropping XSLT support seems like the best available outcome:
> "I just stepped down as libxslt maintainer and it's unlikely that this project will ever be maintained again"
I used XSLT once to publish recipes on the web. The cookbook software my mom used (maybe MasterCook?) could "export as xml" and I wrote an xslt to transform it into readable html. It was fine. It's, of course, also possible to run the XSLT from the command line to generate static html.
The suggestion of using a polyfill is a bit nonsensical, as I suspect little new web content is being written in XSLT, so someone would have to go through all the old pages out there and add the polyfill. Does anyone know if XSLT could be accomplished with a Chrome extension? That would make more sense.
It would surely be possible to combine a polyfill with a WebExtension. I'm not sure if XSLT contains any footguns that would make this approach hard, but if it's solely a single client-side transformation of the initial XML response, it should work fine.
The idea of building something like PDF.js makes a lot of sense. I think the core crux of it though is the polyfill should be in the browser, not something that a site maintainer has to manually implement.
Yet the web has been prospering for two decades in spite of the quasi-monopoly state of browsers. It's living evidence that the dominant browser vendor doesn't have as much power as people imagine.
Like 90%+ of internet traffic goes to a handful of sites owned by tech giants. Most of what's left is SEO garbage serving those same tech giants' ad networks.
Obviously not things like blogs, or things you’d find via search, or independent forums, or newspaper websites. They certainly aren’t prospering.
But walled gardens like YouTube, Discord, ChatGPT and suchlike that are delivered via the browser are prospering. And as a cross platform GUI system, html is astonishingly popular.
There's a difference between web technologies and "the web" as an amorphous philosophical construct. Web technologies, as you stated, are obviously doing just fine. I'd argue the latter isn't. To be more specific, the latter as it was envisioned (in a way that I, and I speculate, GP also still subscribe to) 20+ years ago.
I don't think we all necessarily agree that "high value businesses" is the same as "prospering". If you mean "prospering" as in "making some people rich", sure, but if you mean "being beneficial to society at large", it's certainly debatable.
I had no idea what XSLT even was until today. Reading the submission, the thread linked by u/troupo below, and Wikipedia, I find that it's apparently used in RSS parsing by browsers, because RSS is XML and then XSLT is "originally designed for transforming XML documents into other XML documents" so it can turn the XML feed into an HTML page
I agree RSS parsing is nice to have built into browsers. (Just like FTP support, that I genuinely miss in Firefox nowadays, but allegedly usage was too low to warrant the maintenance.) I also don't really understand the complaint from the Chrome people that are proposing it: "it's too complex, high-profile bugs, here's a polyfill you can use". Okay, why not stuff that polyfill into the browser then? Then it's already inside the javascript sandbox that you need to stay secure anyway, and everything just stays working as it was. Replacing some C++ code sounds like a win for safety any day of the week
On the other hand, I don't normally view RSS feeds manually. They're something a feed parser (in my case: Blogtrottr and AntennaPod) would work with. I can also read the XML if there is ever a reason for me to look at it, or the server can transform the RSS XML into XHTML with the same XSLT code, right? If it's somehow a big deal to maintain, and RSS is the only thing that uses it, I'm also not sure how big a deal it is to have people install an extension if they view RSS feeds regularly on sites where the server offers no HTML render of that information. It's essentially the same solution as if Chrome put the polyfill inside the browser: the browser transforms the XML document inside the JS sandbox.
It's much more general purpose than that. RSS is just XML after all. XSLT basically lets you transform XML into some other kind of markup, usually HTML.
I think the principle behind it is wonderful. https://www.example.com/latest-posts is just an XML file with the pure data. It references an XSLT file which transforms that XML into a web page. But I've tried using it in the past and it was such a pain to work with. Representing things like for loops in markup is a fundamentally inefficient thing to do, JavaScript based templating is always going to win out from the developer experience viewpoint, especially when you're more than likely going to need to use JS for other stuff anyway.
It's one of those purist things I yearn for but can never justify. Shipping XML with data and a separate template feels so much more efficient than pre-prepared HTML that's endlessly repetitive. But... gzip also exists and makes the bandwidth savings a non-issue.
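Concretely, the referencing described above is a single processing instruction at the top of the data file (a hypothetical sketch):

    <?xml version="1.0"?>
    <?xml-stylesheet type="text/xsl" href="/latest-posts.xsl"?>
    <posts>
      <post date="2025-11-01">
        <title>Hello, world</title>
      </post>
    </posts>

The browser fetches the stylesheet (once; it's cacheable), runs the transform, and renders the result in place of the raw XML.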
RSS likely isn't the only thing that uses it. XSLT is basically the client side declarative template language for XML/HTML that people always complain doesn't exist (e.g. letting you create your own tags or do includes with no server or build steps).
I understand that there are more possible uses for the tool, but RSS is the only one I saw someone mention. Are there more examples?
It may be that I don't notice when I use it, if the page just translates itself into XHTML and I would never know without opening the developer tools (which I do often, fwiw: so many web forms are broken that I have a habit of opening F12, so I always still have my form entries in the network request log). Maybe it's much more widespread than I knew. I have never come across it, and my job is testing third-party websites for security issues, so we see a different product nearly every week (maybe those sites need less testing because they're not as commonly interactive? I may have a biased view, of course).
It's by far the easiest way to do templated pages. I use it for my personal stuff (e.g. photo albums I share with my mom), but I can't imagine Google cares about the non-commercial web.
I think I've read some governments still use it, which would make sense since they usually don't have a super high budget for tons of developers, so they have to stick to the easy way to do things.
Right, that sounds like a blind spot of mine as well. We test nearly only commercial products (or open-source projects large enough to get commercial backing), and in my private time, of course, I'd come across big websites sooner than small ones. Still, I'm surprised I never even heard of it (also considering we literally had a class on XML and its features, like those DTDs I never found a use for in the decade since). Sounds like I should look into XSLT, since I also build a lot of small tools, and simple old tech is generally right up my alley!
I use it to maintain our product catalog at work. The server does the final rendering of the complete document but as a page is getting edited the preview is getting rendered in the browser. Back to what everyone is saying, this isn't important enough to move the needle for people making these decisions.
Almost every single government organization uses it to publish their official documents. Lots of major corporations too.
As much of a monopoly as Chrome is, if they actually try to remove it they're likely to get a bunch of government web pages outright stating "Chrome is unsupported, please upgrade to Firefox or something".
Huh? I mainly see official government documents as annoying PDFs. Thankfully someone had the bright idea to turn the national law's text into a proper webpage and not use an image-like format for that. (I think regional governments still publish laws as PDFs, though.) Double-checking now, yes: that's definitely HTML and not transformed XML.
Which government or governmental organizations are you talking about?
Practically every WordPress site with one of the top two SEO plugins (I'm not familiar with others) serves XML sitemaps with XSLT. It's used to make the XML contents human readable and to add a header explaining what it is.
Did you ever use a sitemap as a human? I've only ever seen it recommended for SEO, and search engines are perfectly capable of parsing sitemap.xml without needing it turned into some transformed format, or at least so was my understanding (been a while since I looked into sitemaps or SEO). It seems to only be linked in robots.txt, not to any humans: https://www.sitemaps.org/protocol.html#informing
Every (Wordpress) site with an SEO plugin should be fine, since the search engines can still read it and that's the goal of an SEO plugin
> I also don't really understand the complaint from the Chrome people that are proposing it: "it's too complex, high-profile bugs, here's a polyfill you can use".
Especially considering the number of complex standards they have no qualms about, from WebUSB to 20+ web-components standards.
> On the other hand, I don't normally view RSS feeds manually.
Chrome metrics famously underrepresent corporate installations. There could be quite a few corporate applications using XSLT, as it was all the rage 15-20 years ago.
My guess is that they're fine with WebBluetooth/USB/FileSystem/etc. because the code for the new standard is recent and sticks with modern security sensibilities.
XSLT (and basically anything else that existed when HTML5 turned ten years old) is old code using old quality standards and old APIs that still need to be maintained. Browsers can rewrite them to be all new and modern, but it's a job very few people are interested in (and Google's internal structure heavily prioritizes developing new things over maintaining old stuff).
Nobody is getting a promotion for modernizing the XSLT parser. Very few people even use XSLT in their day-to-day, and the biggest product of the spec is a competitor to at least three of the four major browser manufacturers.
XSLT is an example of exciting tech that failed. WebSerial is exciting tech that can still prove itself somehow.
The corporate installations still doing XSLT will get stuck running an LTS browser, like they did with IE11 and the many now-failed strains of technology it still supported (anyone remember ActiveX?).
We pentest lots of corporate applications, so if this had been widely deployed in the last ~8 years that I've been doing the job full time, I don't know how I would have missed it (like, never even saw a talk about it, never saw a friend using it, never heard a colleague having to deal with it... there are lots of opportunities besides getting such an assignment myself). Surely there are talks on it if you look for them, but I don't have the impression that this is a common corporate thing, at least among the kinds of customers we have (mainly larger organizations). A sibling comment mentions they use it on their hobby site, though.
I thought XML was the big hype, not XSLT. That I somehow never saw it mentioned that you can make actual webpages and other useful stuff with it is probably why I never understood why people thought XML was so useful ^^' I thought it was just another data format like JSON or CSV, that we might as well have written HTML as {"body":{"p":"Hello, World!"}}, and that it's just serendipity that XML came earlier.
Huh! I'm learning a lot here today. Trying to find more info: indeed, the top answer on Stack Overflow for the "XSLT equivalent for JSON" is XSLT itself: https://stackoverflow.com/a/49011455/. It's hard to find how you'd actually use it though; basically all the results I get for "xslt json" are about different tools that convert between JSON and XML.
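Fwiw, XSLT 3.0 itself can consume JSON via the standard json-to-xml() function, which turns JSON into an XML tree you can then transform. A sketch (it needs an XSLT 3.0 processor such as SaxonJS; browsers only ship 1.0):

    <xsl:stylesheet version="3.0"
        xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
        xmlns:fn="http://www.w3.org/2005/xpath-functions">
      <xsl:template name="xsl:initial-template">
        <!-- json-to-xml() yields elements in the fn: namespace -->
        <ul>
          <xsl:for-each select="json-to-xml('[&quot;a&quot;,&quot;b&quot;]')/fn:array/fn:string">
            <li><xsl:value-of select="."/></li>
          </xsl:for-each>
        </ul>
      </xsl:template>
    </xsl:stylesheet>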
At the time I ran across lots of real websites using it. I successfully used it myself at least once too. Off the top of my head, Blizzard was using it to format WoW player profiles for display in the browser.
So is the metaverse, at least depending on the definition. Second Life is mentioned as an example of one on Wikipedia, and that died pretty quickly because it was more of a mechanism than a destination in itself. The general concept of hanging out online with an avatar and friends is not gone at all.
5G was another hype word. Can't say that's not useful! I don't really notice a difference with 4G (and barely with 3G) but apparently on the carrier side things got more efficient and it is very widely adopted
I guess there's a reason the Gartner hype cycle ends with widespread adoption and not with "dead and forgotten": most things are widely picked up for a reason. (Having said that, if someone can tell me what the unique selling point of an NFT was, I've not yet understood that one xD)
Actually, I think removing XSLT is bad because it means we are more tied to JavaScript or other languages for XML transformation, instead of a language designed for this specific purpose, a DSL.
Which means more unreadable code.
But if they decide to remove XSLT from the spec, I would be more than happy if they removed JS too. The same logic applies.
Having browsers transform XML data into HTML via XSLT is a cool feature, and it works completely statically, without any server-side or client-side code. It would be a shame if that were removed. I have a couple dozen XML databases that I made accessible in a browser using XSLT...
So annoying, XSLT is very powerful but browsers let it languish at 1.0
XSLT 1.0 is still useful though, and absolutely shouldn't be removed.
Them: "community feedback"
Also them: <marks everything as off topic>
This came about after the maintainer of libxml2 found giving free support to all these downstream projects (from billionaire and trillionaire companies) too much.
Instead of just funding him, they have the gall to say they don't have the money.
While this may be true in the microcosm of that project, the devs should look at the broader context and who they are actually working for.
The XSLT juice is worth the squeeze, but only to a tiny minority of users, and there are costly rewrites to do to keep XSLT in there (for Chrome, at least).
Here's what I wish could happen: allow implementers to stub out the XSLT engine and tell users who love it that they can produce a memory-safe implementation themselves if they want the functionality put back in. The passionate users and preservationists would get it done eventually.
I know that's not a good solution because a) new XSLT engine code needs to be maintained and there's an ongoing cost for that for very few users, b) security reviews are costly for the new code, c) the stubs themselves would probably be nasty to implement, have security implications, etc. And there are probably reasons d-z that I can't even fathom.
It sucks to have functionality removed/changed in the web platform. Software must be maintained though; cost of doing business. If a platform doesn't burden you with too much maintenance and chooches along day after day, then it's usually a keeper.
This proposal seems to be aimed at removing native support in favor of a WASM-based polyfill (like PDF.js, I guess) which seems reasonable?
Google definitely throws its weight around too much w.r.t. to web standards, but this doesn't seem too bad. Web specifications are huge and complex so trying to size down a little bit while maintaining support for existing sites is okay IMO.
No, that would indeed be reasonable, but the proposal is to remove XSLT from the standard and remove Chrome support for XSLT entirely, forcing websites to adopt the polyfill themselves.
Which is, to me, silly. If you ship the polyfill then there's no discussion to be had. It works just the same as it always has for users and it's as secure as V8, no aging native codebase with memory corruption bugs to worry about.
> It works just the same as it always has for users
No it doesn't. An HTML page constructed with XSLT written 10 years ago will suddenly break when browsers remove XSLT. The webmaster needs to add the polyfill themselves. If the webmaster doesn't do that, then the page breaks.
From a user perspective, it only remains the same as before if the webmaster adopts the polyfill. From the web developer perspective, this is a breaking change that requires action. "shipping the polyfill" requires changes on many many sites - some of which have not needed to change in many years.
It may also be difficult to do. I'm not sure what their proposed solution is, but often these are static XML files that include an XSLT stylesheet - difficult to put JS in there.
At the moment, XSLT in a browser doesn't depend on Javascript, so works even if JS is turned off. Using a polyfill instead will mean that XSLT will only work if JS is turned on.
That depends how the browsers implement it, no? Much of modern browser's user interface is also built using web technologies including JS and that doesn't break if you "disable JS".
Last I checked, it’s a polyfill that Chrome won’t default include - they’re just saying that they’d have a polyfill in JS and it’s on site authors to use.
That breaks old unmaintained but still valuable sites.
As a user you can only use the polyfill to replace the XSLTProcessor() JavaScript API. You can't use the polyfill if you're using XSLT for XML Stylesheets (<?xml-stylesheet … ?> tags).
(But of course, XML Stylesheets are most widely used with RSS feeds, and Google probably considers further harm to the RSS ecosystem as a bonus. sigh)
Moz also has no love for RSS, having removed support for live bookmarks in Firefox 64 (2018) and no longer displaying the RSS icon anywhere in the UI when a website has any <link rel="alternate" type="application/rss+xml"> tags. If you want to subscribe to feeds you have to jump through a bunch of hoops instead of it being a single click.
Fortunately, Thunderbird still has support for feeds and doesn't seem to have been afflicted by the same malaise as the rest of the org chart. Who knows how long that will last.
Setting aside the discussion of the linked issue itself (tone, comments, etc), I feel like I need to throw this out there:
I don't understand the point in having a JS polyfill and then expecting websites to include it if they want to use XSLT stuff. The beauty of the web is that shit mostly just works going back decades, and it's led to all kinds of cool and useful bits of information transfer. I would bet money that so much of the weird useful XSLT stuff isn't maintained as much today - and that doesn't mean it's not content worth keeping/preserving.
This entire issue feels like it would be a nothing-burger if browser vendors would just shove the polyfill into the browser and auto-run it on pages that previously triggered the fear-inducing C++ code paths.
What exactly is the opposition to this? Even reading the linked issue, I don't see an argument against this that makes much sense. It solves every problem the browser vendors are complaining about and nothing functionally changes for end users.
Chrome is the dominant browser. Sad as this may be removing it from Blink means de facto removing it from the spec.
That being said, I'm not against removing features but neither this or the original post provide any substantial rationale on why it should be removed. Uses for XSLT do exist and the alternative is "just polyfill it" which is awkward especially for legacy content.
I don't get the people complaining that they need it on their low-power microcontrollers yet instead of using an XSLT library they'd rather pull in Chromium.
With how bloated browsers are right now, good riddance IMO
They are not talking about pulling in Chromium on a microcontroller. Their web server is on a microcontroller, so they want to minimize server side CPU usage and force the browser to do their XSLT transformation.
Since it's a microcontroller, modifying that server and pushing the firmware update to users is probably also a pain.
Yeah, I don't think XML + XSLT is any better than, or allows anything that, sending say JSON and transforming it with JS wouldn't. However, that would require changing the firmware, which as you mention may be difficult or impossible.
I think they're talking about outputting XML+XSLT on those microcontrollers, i.e. just putting out text. Chromium would come in for the viewer who's loading whatever tiny status-webpage those microcontrollers are hosting on a separate device.
There are better candidates to remove from the spec than XSLT, like HTML. The parsing rules for HTML are terrible, and they hinder further advancement of the spec more than anything. The biggest mistake of HTML was backpedaling on the switch to XHTML.
Removal of anything is problematic though, better off freezing parts of the spec to specific compatibility versions and getting browsers to ship optional compatibility modes that let you load and view old sites.
I saw XSLT used to transform RSS feeds into something nicely human readable. That is, the RSS feed was referencing the XSLT. Other than that I haven't noticed the use of XSLT on the web.
IBM owns a very high-performance XSLT engine they could probably open-source or license to the browser makers. If anyone from IBM is here, they may want to consider it.
If security and memory-safety are the concern and there is already a polyfill, why remove the API from the standard instead of just using the WASM-based polyfill internally?
They want to punt a half-baked polyfill over the wall and remove support from the browser so they don't have to do any maintenance work, making it someone else's problem.
If this is in response to Nick Wellnhofer's announcement from three months ago that he would stop embargoing/prioritizing libxslt/libxml2 CVEs due to lack of manpower (which I suspect is a consequence of projects being flooded with bogus LLM-generated findings from students wanting to pad their profiles), wouldn't it be possible to ship an emscripten-compiled libxslt instead of libxslt proper?
> Because I sometimes get similar letters from the Google Cloud Platform. They look like this:
>> Dear Google Cloud Platform User,
>> We are writing to remind you that we are sunsetting [Important Service you are using] as of August 2020, after which you will not be able to perform any updates or upgrades on your instances. We encourage you to upgrade to the latest version, which is in Beta, has no documentation, no migration path, and which we have kindly deprecated in advance for you.
>> We are committed to ensuring that all developers of Google Cloud Platform are minimally disrupted by this change.
>> Besties Forever,
>> Google Cloud Platform
> But I barely skim them, because what they are really saying is:
>> Dear RECIPIENT,
>> Fuck yooooouuuuuuuu. Fuck you, fuck you, Fuck You. Drop whatever you are doing because it’s not important. What is important is OUR time. It’s costing us time and money to support our shit, and we’re tired of it, so we’re not going to support it anymore. So drop your fucking plans and go start digging through our shitty documentation, begging for scraps on forums, and oh by the way, our new shit is COMPLETELY different from the old shit, because well, we fucked that design up pretty bad, heh, but hey, that’s YOUR problem, not our problem.
>> We remain committed as always to ensuring everything you write will be unusable within 1 year.
But if you live in a capitalist country with a free market, shouldn't several competitors pop up and offer to migrate your system into their cloud for free? No way a capitalist overlooks an unoccupied market niche.
Intent to remove: emergency services dialling (911, 112, 000, &c.)
Almost no one ever uses it: metrics show only around 0.02% of phone calls use this feature. So we’re planning on deprecating and then removing it.
—⁂—
Just an idea that occurred to me earlier today. XSLT doesn’t get a lot of use, but there are still various systems, important systems, that depend upon it. Links to feeds definitely want it, but it’s not just those sorts of things.
Percentages only tell part of the story. Some are tiny features that are used everywhere, others are huge features that are used in fewer places. Some features can be removed or changed with little harm—frankly, quite a few CSS things that they have declined to address on the grounds of usage fall into this category, where a few things would be slightly damaged, but nothing would be broken by it. Other features completely destroy workflows if you change or remove them—and XSLT is definitely one of these.
Do we know WebKit's, KHTML's, and Gecko's stance on this?
I know this is for security reasons, but why not update the XSLT implementation instead? And if features that aren't used get dropped, they might as well do it all in one go. I am sure lots of the HTML spec isn't even used.
I get the impression they are ripping it out because they don't want to sponsor the FOSS volunteer working on it or deal w/ maintaining it themselves. The tracking/advertising take doesn't hold much water for me as adding those things to the page is something developers and companies choose to do. You could just as easily inject a tracking script tag or pixel or whatever via XSLT during transformation if you wanted.
No. The official statement from Brian was “I received a couple of personal e-mails from some credible people who stated that their data belonged to them, so we (I) decided to make it opt-in” (paraphrased).
I spent days in that thread. To them, that uproar was "a noisy minority not worth listening to".
It's good to know that's what it looks like. I can tell you that the shouting did not really influence the decision. Long-time Go contributors and supporters commenting quietly or emailing me privately had far greater influence.
So as a person who had just started programming in Go and made some good technical comments, I didn't matter at all. Only people with clout mattered, and the voice had to come from the team itself. Otherwise we users' influence is "fuck all" (sorry, my blood boils every time I read this comment from Russ).
I mean yeah, I too would probably prefer to read a few well-reasoned arguments over email than to wade through hundreds of hateful, vitriolic, accusatory comments from randos in a GitHub thread. Being an open-source maintainer is hard.
Or, you know, do the right thing from the start considering that forced telemetry you have to opt-out of is universally reviled and every project that includes it suffers from literally the same issues.
Looks like they're going to ram it through anyway, no matter the existing users. There's got to be a better way to deal with spam than just locking the thread to anyone with relevant information.
WHATWG literally forced the W3C to sign a deal and obey their standards. WHATWG is basically Google + Apple + Microsoft directly writing the browser standards. Fixing Microsoft's original Internet Explorer mistake of not creating a faux committee, lol.
"Heated discussion" sounds like any comment voicing legitimate concern being hidden as "off-topic", and the entire discussion eventually being locked. Gives me Reddit vibes, I hope this is not how open web standards are managed.
This seems like the kind of thing that won't require any resources to maintain, other than possible bugfixes (which 3rd parties can provide). It only requires parsing and DOM manipulation, so it doesn't really require any features of JS or WASM that would be deprecated in the future, and the XSLT standard that is supported by browsers is frozen - they won't ever have to dedicate resources to adding any additional features.
That is an interesting approach, you could suggest it? In general using JS to implement web APIs is very difficult, but using WASM might work especially for the way XSLTProcessor works today.
This is disappointing. I was using XSLT for transforming SVGs, having discovered it early last year via a chat. Even despite browsers only shipping with v1.0 it still allowed a quite compact way to manipulate them without adding some extra parser dependency.
Wait, all the web browsers had XSLT support all along?
I remember using these things in a CSCI class, and, IIRC, we were using something akin to Tomcat to do transformations on the server, before serving HTML to the browser, circa 2005/2006.
I had to look up what XSLT was (began working professionally as a programmer in 2013). Honestly, if it simplifies the spec, at this point it seems like a good idea to remove it.
But at the same time, people don't want web pages and web apps to become fully opaque like Flutter web, or complex, minified JS-heavy sites. Even the latter retain many of the a11y benefits of markup.
I think that's a tradeoff.
Simplest approach would be to just distribute programs, but the Web is more than that!
Another simple approach would be to have only HTML and CSS, or even only HTML, or something like Markdown, or HTML + a different simple styling language...
and yet nothing of that would offer the features that make web development so widespread as a universal document and application platform.
I think most people just don't care, although the a11y benefits are truly important. HTML isn't going anywhere, and often you need JS to make things more accessible.
But like, most people just want a site to work, provide value, save them time, etc., and the way the site is built is entirely unimportant. I find myself moving toward that side despite having been somewhat of a web purist for years.
The vision of XML was a semantic web. Nowadays everybody knows that semantic is 'em' and non-semantic is 'b' or 'i'. This is simple, but wrong. In fact a notation is semantic when you can make all the distinctions you care about and (this is important) do not have to make distinctions you do not care about. In this case every distinction means something and thus is semantic.
How do you apply this to documents? They are so different. XML gives the answer: you INVENT a notation that suits just your case and use it. This way you perfectly solve the enigma of semantic.
OK, fine, but what to do with my invented notation? Nobody understands it. Well, that is OK. You want to render it as HTML; HTML has no idea about your notation, but it is (was) also a kind of XML, so you write a transformation from your notation to HTML. Now you want to render it for printed media: here is XSL-FO, go ahead. Or maybe you want to let blind people read your document too; here is (a non-existent) AUDIO-ML, just add a transformation into that format. In fact there could be lots of different notations for different purposes (search, for instance), and they are all within a single transformation step.
And for that transformation we give you a tool: XSLT.
(I remember a piece discussed here; it was about different languages, and one of the examples of very simple languages was XSLT. It is my impression as well; XSLT is unconventional, but otherwise very simple.)
Of course you do not have to invent a new notation each time. It's equally fine to invent small specific notations and mix them with yours.
For example, imagine a specific chess notation. It allows you to describe positions and a sequence of moves, giving you either a party or a composition. You write about chess and add snippets in this notation. First, it can be very expressive; referring to a position should take no more than:
<position party="#myparty" move="22w" />
Given the party is described this can render the whole board. Or you can refer to a sequence of moves:
<moves party="#myparty" from="22w" to="25b" />
and this can be rendered in any chess move notation.
And then imagine a specific search engine that crawls the web, indexes parties and compositions and then can search, for example, for other pages that discuss this party, or for similar positions, or for matching sequences of moves.
XML even had a foundation to incorporate other notations. XML itself is indeed verbose (although this can be lessened with good design, which is rare), but starting from v1.0 it has a way to formally indicate that the contents of an element are written in a specific notation. If that direction had been followed, it could have led to things like:
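(a sketch using XML 1.0 NOTATION declarations; the notation name and identifier are invented:)

    <!DOCTYPE article [
      <!NOTATION chess-pgn SYSTEM "https://example.org/notations/pgn">
      <!ELEMENT article ANY>
      <!ELEMENT party (#PCDATA)>
      <!ATTLIST party
        id       ID                   #IMPLIED
        notation NOTATION (chess-pgn) #REQUIRED>
    ]>
    <article>
      <party id="myparty" notation="chess-pgn">
        1. e4 e5 2. Nf3 Nc6 3. Bb5 a6
      </party>
    </article>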
The vision of XML was a federated web. Lots of notations, big and small, evolving and mixing. It was dismissed on the premise that it was too strict. I myself think it was too free.
The HTML spec is actually constantly evolving. New features like the dialog element [0] and popover [1] have been added every year. But removing something from the spec is very rare, if it has ever happened before.
The W3C spec was. But WHATWG and HTML5 represent a coup by the dominant browser corporations (read: Google). The biggest browser dictates the "living standard" and the W3C is forced into a descriptivist role.
The W3C's plan was for HTML4 to be replaced by XHTML. What we commonly call HTML5 is the WHATWG "HTML Living Standard."
They weren't sidelined because they had bad ideas (XHTML 2.0 had a lot of great ideas, many of which HTML5 eventually "borrowed"), they were sidelined because they still saw the web as primarily a document platform and Google especially was trying to push it as a larger application platform. It wasn't a battle between the ivory tower and practical concerns, it was a proxy battle in the general war between the web as a place optimized to link between meaningful, accessibility-first documents and the web as a place to host generalized applications with accessibility often an afterthought. (ARIA is great, but ARIA can only do so much, not as much of it by default/a pit of success as XHTML 2.0 once hoped to be.)
It will. It will make old non-updated pages break, meeting the same fate as old outdated pages which used MathML in the past and were not updated with polyfills.
Who else is watching this who grew up watching this same movie play out with Microsoft/IE as the villain and Google as the hero? (Anyone want to make the "live long enough" quote?)
I'm sorry but I don't understand this. If a polyfill can add xslt support then why don't browser vendors ship the polyfill and apply it automatically when necessary?
As much as I think XSLT is cool, if it's used by practically nobody and contains real security vulnerabilities... oh well. You can't deny that combination is a good objective reason to remove it.
And browsers are too big with too many features; reducing the scope of what a browser does is good (but not enough by itself to remove a feature).
Maybe one day it will come back as a black-box module running in an appropriate sandbox - like I think Firefox uses for PDF rendering.
A few things to note:
- This isn't Chrome doing this unilaterally. https://github.com/whatwg/html/issues/11523 shows that representatives from every browser are supportive and there have been discussions about this in standards meetings: https://github.com/whatwg/html/issues/11146#issuecomment-275...
- You can see from the WHATNOT meeting agenda that it was a Mozilla engineer who brought it up last time.
- Opening a PR doesn't necessarily mean that it'll be merged. Notice the unchecked tasks - there's a lot still to do on this one. Even so, given the cross-vendor support for this, it seems likely to proceed at some point.
Also, https://github.com/whatwg/html/issues/11523 (Should we remove XSLT from the web platform?) is not a request for community feedback.
It's an issue open on the HTML spec for the HTML spec maintainers to consider. It was opened by a Chrome engineer after at least two meetings where a Mozilla engineer raised the topic, and where there was apparently vendor support for it.
This is happening after some serious exploits were found: https://www.offensivecon.org/speakers/2025/ivan-fratric.html
And the maintainer of libxslt has stepped down: https://gitlab.gnome.org/GNOME/libxml2/-/issues/913
There is a better alternative to libxslt - xee[1][2]. It was discussed[3] on HN before.
[1] https://blog.startifact.com/posts/xee/
[2] https://github.com/Paligo/xee
[3] https://news.ycombinator.com/item?id=43502291
Disclaimer: I work on Chrome/Blink and I've also contributed a (very small) number of patches to libxml/libxslt.
It's not just a matter of replacing the libxslt; libxslt integrates quite closely with libxml2. There's a fair amount of glue to bolt libxml2/libxslt on to Blink (and WebKit); I can't speak for Gecko.
Even when there's no work on new XML/XSLT features, there's a passive cost to just having that glue code around since it adds quirks and special cases that otherwise wouldn't exist.
> Xee implements modern versions of these specifications, rather than the versions released in 1999.
My understanding is that browsers specifically use the 1999 version and changing this would break compat
As if removing XSLT entirely won’t break back-compat?
I think this discussion is quite reasonable, but it also highlights the power imbalance: If this stuff is decided in closed meetings and the bug trackers are not supposed to be places for community feedback, where can the community influence such decisions?
I think it depends on the spec. Some of the working groups still have mailing lists, some of them have GitHub issues.
To be completely honest, though, I'm not sure what people expect to get out of it. I dug into this a while ago for a rather silly reason and I found that it's very inside baseball, and unless you really wanted to get invested in it it seems like it'd be hard to meaningfully contribute.
To be honest if people are very upset about a feature that might be added or a feature that might be removed the right thing to do is probably to literally just raise it publicly, organize supporters and generally act in protest.
Google may have a lot of control over the web, but note that WEI still didn't ship.
If people are upset about xslt being removed, step 1 would have been to actually use it in a significant way on the web. Step 2 would have been to volunteer to maintain libxslt.
Everyone likes to complain as a user of open source. Nobody likes to do the difficult work.
What use would count as significant? Only if a big corp like Google uses it?
XSLT is used on the web. That's why people are upset about Google & friends removing it while ignoring user feedback.
Yep, there's a massive bias in companies like Google, Amazon, Microsoft to only see companies their own size.
Outside of this is a whole universe.
Didn't someone step up to volunteer to maintain libxslt a few weeks ago? https://gitlab.gnome.org/GNOME/libxslt/-/issues/150
Knowing our luck it’s probably Jia Tan.
I'm not that familiar with XSLT but isn't it already quite hobbled? Can it be used in a significant way? Or is this a chicken-egg problem where proving it's useful requires the implementation to be filled out first.
On the link in the post you can scroll down to someone’s comment with a few links to XSLT in action.
It’s been years since I’ve touched it, but clicking the congressional bill XML link and seeing a perfectly formatted and readable page reminded me of exactly why XSLT has a place. To do the same thing without it, you’d need some other engine to parse the XML, convert it to HTML, and then ensure the proper styles get applied - this could of course be backend or frontend, either way it’s a lot of engineering overhead for a task that, with XSLT, requires just a stylesheet.
Do Library of Congress and Congress count as significant usage?
https://news.ycombinator.com/item?id=44958929
WhatWG has a fairly well documented process for feature requests. Issues are not usually decided in closed meetings. But there’s a difference between constructive discussion and the stubborn shameless entitlement that some members of the community are displaying in their comments.
https://blog.whatwg.org/staged-proposals-at-the-whatwg
No. WhatWG only has a process for adding and approving features.
It has no process for discussing the removal of features, or for speaking out against a feature.
Fwiw the meetings aren't closed, unlike w3c the whatwg doesn't require paid membership to attend.
The bug trackers are also a fine place to provide community feedback. For example there's plenty of comments providing use cases that weren't hidden. But if you read the hidden ones (especially on the issue rather than PR) there's some truly unhinged commentary that rightly resulted in being hidden and unfortunately locking of the thread.
Ultimately the way the community can influence decisions is to not be completely unhinged.
Like someone else said the other way would be to just use XSLT in the first place.
Honestly, your chance to impact this decision was when you decided what technologies to use on your website, and then statistically speaking [1], chose not to use XSLT in the browser. If the web used it like crazy we would not be having this conversation.
Your other opportunity is to put together a credible plan to resource the XSLT implementations in the various browsers. I underline, highlight, bold, and italicize the word "credible" here. You are facing an extremely uphill battle from the visible lack of support for the development; any truly credible offer should have come many years ago. Big projects are well aware of the utility of last-minute, emotionally-driven offers of support in the midst of a burst of publicity, viz, effectively zero.
I don't know that the power is as imbalanced as people think here, so much as that a very long, drawn-out conversation has been had by the web as a whole. On the whole, the web has agreed, by the vast bulk of implementation work, that this is not a terribly useful technology, and this is the final closing chapter where the browsers are basically implementing the will of the web. The standard for removal isn't "literally 0 usage in the entire world", and whatever the standard is, if XSLT isn't on the "remove" side of it, that would just be a sign the standard needs to be tuned up, because XSLT is a complete non-entity on the web. If you don't feel like your voice is being respected, it's because it's one of literally millions upon millions; what do you expect?
[1]: I know exceptions are reading this post, but you are exceptions. And not terribly common ones.
Statistically, how many websites are using webusb? I'm guessing fewer than xslt, which is used by e.g. the US Congress website.
I have a hard time buying the idea that document templating is some niche use-case compared to pretty much every modern javascript api. More realistically, lots of younger people don't know it's there. People constantly bemoan html's "lack" of client side includes or extensible component systems.
You seem to be assuming that I would argue against removing webusb. If it went through the same process and the system as a whole reached the same conclusion, I wouldn't fight it too hard personally.
There's probably half-a-dozen other things that could stand serious thought about removal.
There is one major difference though, which is that if you remove webusb, the functionality is just gone, whereas XSLT can be done through Javascript/WebASM just fine.
Document templating is obviously not a niche case. That's why we've got so many hundreds of them. We're not lacking in solutions for document templating, we're drowning in them. If XSLT stands out in its niche, it is as being a particularly bad choice, which is why nobody (to that first approximation we've all heard so much about) uses it.
Where is the US Congress's website identified as a potentially impacted site? https://chromestatus.com/metrics/feature/timeline/popularity...
edit: I see Simon mentioned it - https://simonwillison.net/2025/Aug/19/xslt/ - e.g., https://www.congress.gov/119/bills/hr3617/BILLS-119hr3617ih.... - the site seems to be even less popular than Longhorn Steakhouse in Germany.
My guess is that they'll shuffle people to PDF or move rendering to the server side, which is a common (and, with today's computing power, extremely cheap) way to generate HTML from XML.
Is it cheaper than sending XML and a stylesheet though?
Further, PDF and server-side are fine for achieving the same display, but it removes the XML of it all - that is to say, someone might be using the raw XML to lower tools, feeds, etc. if XSLT goes away and congress drops the XML links in favor of PDFs etc, that breaks more than just the pretty formatting
1. No, not cheaper, but the incremental cost of server-side rendering is minimal (especially at the low request rates these pages receive)
2. One should still be able to retrieve the raw XML document. It's just that it won't be automatically transformed client-side.
i just built a website in XSLT, and implementing some form of client-side include in XSLT is not easier than doing the same in javascript. while i agree with you that client-side includes are sorely missing in HTML, XSLT is not the answer to that problem. anyone who doesn't want to use javascript to implement client-side includes won't want to use XSLT either.
> If the web used it like crazy we would not be having this conversation.
It's been a standard part of the Web platform for years. The only question should be, "Is _anyone_ using it?", not whether it's being "used like crazy" or not.
Don't break the Web.
Counterpoint: most websites are not useful. If we only count useful websites a much higher percentage of them are using XSLT.
But useful websites are much less likely to be infested by the all consuming Goo admalware.
[Citation needed]
Seriously, i doubt this.
A lot of very old, SPA-like heavy applications use XSLT. Basically, enterprise web applications (not websites) that predate fetch and REST, and that targeted (or still target) Internet Explorer 5/6.
There was a time where the standard way to build a highly interactive SPA was using SOAP services on the backend combined with iframes on the front end that executed XSLT in the background to update the DOM.
Obviously such an approach is extremely out of date and you won't find it on any websites you use. But, a lot of critical enterprise software was built this way and is kind of stuck like this.
> Internet Explorer 5/6
Afaik IE5 did not support XSLT. It supported a similar but proprietary language. I think IE6 was the first version to support XSLT.
I feel like when I see enterprise XSLT, a lot of it is server-side.
I ran XSLT in the foreground; it was fast enough for that even on a Celeron with 128MB of RAM. Imagine running the modern web 2.0 on 128MB of RAM.
I secondly doubt this. Would love a succinct list of "important" websites.
Do Library of Congress and Congress count? https://news.ycombinator.com/item?id=44958929
It's not for the public to identify these sites. It's for the arrogant Googlers to do a modicum of research.
At first glance the library of congress link appears to be using server side XSLT, which would not be affected by this proposal.
The congress one appears to be the first legit example I have seen.
At first glance the congress use case does seem like it would be fully covered by CSS [you can attach CSS stylesheets to generic xml documents in a similar fashion to xslt]. Of course someone would have to make that change.
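You can attach CSS to a generic XML document with the same kind of processing instruction (sketch; file and element names made up):

    <?xml version="1.0"?>
    <?xml-stylesheet type="text/css" href="bill.css"?>
    <bill>
      <title>Example Act</title>
      <section>Text of the section</section>
    </bill>

with bill.css styling the XML elements directly:

    title   { display: block; font-size: 2em; }
    section { display: block; margin: 1em 0; }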
> Of course someone would have to make that change.
Of course. And yet none of the people from Google even seem to be aware of that.
> The congress one appears to be the first legit example I have seen.
There are more. E.g. podcast RSS feeds are often presented on the web with XSLT: https://feeds.buzzsprout.com/231452.rss
Again, none of the people from Google even seem to be aware of these use cases, and just power through regardless of any concerns.
They are easy to understand :) Modern browsers became such bloatware beyond salvation, they start to feel all the tech debt.
> Of course. And yet none of the people from Google even seem to be aware of that.
I don't see any reason to assume that. I don't think anyone from google is claiming the literal number of sites is 0, just that it is insignificant.
I am very sure the people at google are aware of the rss feed usage.
Don't confuse people disagreeing with you with people not understanding you.
> I am very sure the people at google are aware of the rss feed usage.
No. No they aren't. As you can see in the discussion: https://github.com/whatwg/html/issues/11523 where the engineer who proposed this literally updates his "analysis" as people point out use cases he missed.
Quote:
--- start quote ---
albertobeta: there is a real-world and modern use case from the podcasting industry, where I work. Collectively, we host over 4.5 million RSS feeds. Like many other podcast hosting companies, we use XSLT to beautify our raw feeds and make them easier to understand when viewed in a browser.
mfreed7, the Googler https://github.com/whatwg/html/issues/11523#issuecomment-315... : Thanks for the additional context on this use case! I'm trying to learn more about it.
--- end quote ---
And then just last week: https://github.com/whatwg/html/issues/11523#issuecomment-318...
--- start quote ---
Thanks for all of the comments, details, and information on this issue. It's clear that XSLT (and talk of removing it) strikes a nerve with some folks. I've learned a lot from the posts here.
--- end quote ---
> Don't confuse people disagreeing with you with people not understanding you.
Oh, they don't even attempt to understand people.
Here's him last week adding a PR to remove XSLT from the spec: https://github.com/whatwg/html/pull/11563
Did he address any of the issues? Does he link to any actual research pointing out how much will be broken, where it's used etc.?
Nope.
But then another Googler pulls up, says "good work, don't forget to remove it everywhere else". End of discussion.
I stand by my previous comment.
You're angry you didn't get your way, but the Googler's decision seems logical. I think most software developers maintaining a large software platform would have made a similar decision given the evidence presented (as evidenced by other web browsers making the same one).
The only difference here from most software is that Google operates somewhat in the open. In the corporate world, there would be a customer-service rep to shield devs from the special-interest group's tantrum.
It's worse than that, of course. XSLT removal breaks quite a few government and regulatory sites: https://github.com/whatwg/html/issues/11582
You're naming Google specifically, when it's not just Google. This seems like a you thing, separate to the actual issue at hand.
Well, it's Google who jumped at the opportunity citing their own counters and stats.
Just like they did the last time when they tried to remove confirm/prompt[1] and were surprised to see that their numbers don't paint the full picture, as literally explicitly explained in their own docs: https://docs.google.com/document/d/1RC-pBBvsazYfCNNUSkPqAVpS...
You'd think that the devs of the world's most popular browser would have a little more care than just citing some numbers, ignoring all feedback, and moving forward with whatever they want to do?
Oh. Speaking, of "not just Google".
The question was raised in this meeting: https://github.com/whatwg/html/issues/11146#issuecomment-275... Guess what.
--- start quote ---
dan: even if the data were accurate, not enough zeros for the usage to be low enough.
brian: I'm guessing people will have objections... people do use it and some like it
--- end quote ---
[1] See, e.g. https://gomakethings.com/google-vs.-the-web/
That's not completely wrong, but also misses some nuance. E.g. the thread mentions the fact that web support is still stuck at XSLT 1.0 as a reason for removal.
But as far as I know, there were absolutely zero efforts by browser vendors before to support newer versions of the language, while there was enormous energy to improve JavaScript.
I don't want to imply that if they had just added support for XSLT 3.0 then everyone would be using XSLT instead of JavaScript today and the latest SIMD optimizations of Chrome's XPath pipeline would make the HN front-page. The language is just too bad for that.
But I think it's true that there exists a feedback loop: Browsers can and do influence how much a technology is adopted, by making the tech less or more painful to use. Then turning around and saying no one is using the tech, so we'll remove it, is a bit dishonest.
Javascript was instantly a hit from the day it was released, and it grew from there.
XSLT never took off. Ever. It has never been a major force on the web, not even for five minutes. Even during the "XML all the things!" phase of the software engineering world, with every tailwind it would ever have, it was never a serious player.
There was, at no point, any reason to invest in it any farther.
Moreover, even if you push a button and rewrite history so that even so it was heavily invested in anyhow, I see no reason to believe it would have ever been a major force in that alternate history either. I would personally contend that it has always been a bad idea, and if anything, it has been unduly propped up by the browsers and overinvested in as it is. But perhaps less inflammatorily and more objectively, it has always been a foreign paradigm that most programmers have no experience in, and this was even more true in the "XML all the things!" era which predates the initial Haskell burst that pushed FP forward by a good solid decade, and the prospects of it ever being popular were never all that great.
i also don't see XSLT solving any problem that javascript could not solve. heck, if you really need XSLT in the browser, you could use javascript to call a library like SaxonJS, or you could run it in webassembly.
True, but that raises the question, why don't the browsers do that? I think no one would object if they removed XSLT from the browser's core and instead loaded up some WASM/JavaScript implementation when some XSLT is actually encountered. Sort of like a "built-in extension".
Then browser devs could treat it like an extension (plus some small shims in the core) while the public API wouldn't have to change.
because there is no demand for it.
Using XSLT, you can have template includes that are auto-interpreted by the browser - no need to write code, AFAIK.
XSLT is code. code written with XML syntax. let me give you an example:
in order to create a menu where the current active page is highlighted and not a link, i need to do this:
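something of roughly this shape (a sketch; the page names are made up):

    <xsl:template name="nav-menu">
      <xsl:param name="current"/>
      <ul>
        <li>
          <xsl:choose>
            <xsl:when test="$current = 'index.xhtml'">
              <strong>Home</strong>
            </xsl:when>
            <xsl:otherwise>
              <a href="index.xhtml">Home</a>
            </xsl:otherwise>
          </xsl:choose>
        </li>
        <!-- ...and the same xsl:choose block repeated for every page... -->
      </ul>
    </xsl:template>

    <!-- then on each page, something like:
         <xsl:call-template name="nav-menu">
           <xsl:with-param name="current" select="'index.xhtml'"/>
         </xsl:call-template>
    -->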
XSLT is interesting because it has a very different approach to parsing XML, and for some transformations the resulting code can be quite compact. in particular, you don't have an issue with quoting/escaping special characters most of the time, while still being able to write XML/HTML syntax. but then JSX from react solves that too. so the longer you look at it, the less the advantages of XSLT stand out.

You're sort of exaggerating the boilerplate there; a more idiomatic, complete template might be:
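Something like this (an illustrative sketch; the nav-menu.xml file and the element names are invented):

    <?xml version="1.0" encoding="UTF-8"?>
    <xsl:stylesheet version="1.0"
        xmlns:xsl="http://www.w3.org/1999/XSL/Transform">

      <!-- passthrough: copy anything not explicitly handled below -->
      <xsl:template match="@*|node()">
        <xsl:copy>
          <xsl:apply-templates select="@*|node()"/>
        </xsl:copy>
      </xsl:template>

      <!-- expand <nav-menu current-page="..."/> into a menu,
           highlighting the current page instead of linking it -->
      <xsl:template match="nav-menu">
        <xsl:param name="current" select="@current-page"/>
        <ul>
          <xsl:for-each select="document('nav-menu.xml')/menu/item">
            <li>
              <xsl:choose>
                <xsl:when test="@href = $current">
                  <strong><xsl:value-of select="."/></strong>
                </xsl:when>
                <xsl:otherwise>
                  <a href="{@href}"><xsl:value-of select="."/></a>
                </xsl:otherwise>
              </xsl:choose>
            </li>
          </xsl:for-each>
        </ul>
      </xsl:template>

    </xsl:stylesheet>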
One nice thing about XSLT is that if you start with a passthrough template (the identity transform at the top of the sketch above), you have basically your entire "framework" with no need to figure out how to set up a build environment, because there is no build environment; it's just baked into the browser. Apparently in XSLT 3.0, the passthrough template is shortened to just `<xsl:mode on-no-match="shallow-copy"/>`. In XSLT 2.0+ you could also check against `base-uri(/)` instead of needing to pass in the current page with `<nav-menu current-page="foo.xhtml"/>`, and there's no `param` and `with-param` stuff needed, so in modern XSLT 3.0 the whole thing should be able to be something more straightforward.

The other nice thing is that it's easy to grow into. If you don't want to get fancy with your menu, you can just write a template that expands `<nav-menu/>` into a hardcoded list of links, and now you have a `<nav-menu/>` component that you can add to any page. So to the extent that you're using it to create simple website templates but you're not a "web dev", it works really well for people that don't want to go through all of the hoops that professional programmers deal with. Asking people to figure out react to make a static website is absurd.

wow, thank you. your first example is actually what i have been trying to do but i could not get it to work. i did search for examples or explanations for hours (spread over a week or so). i found the documentation of each of the parts and directives used, but i just could not figure out how to pull it together.
your last example is what i started out with, including the passthrough template. you may remember this message from almost two months ago: https://news.ycombinator.com/item?id=44398626
one comment for the xslt 3 example: href="" doesn't disable the link. it just turns into a link to self (which it would be anyway if the value were present). the href attribute needs to be gone completely to disable the link.
unfortunately i hit another snag: https://stackoverflow.com/questions/3884927/how-to-use-xsl-v...
> nodes you output don't have type "node-set" - instead, they're what is called a "result tree fragment". You can store that to a variable, and you can use that variable to insert the fragment into output (or another variable) later on, but you cannot use XPath to query over it.
the xsl documentation https://www.w3.org/TR/xslt-10/#variables says:
> Variables introduce an additional data-type into the expression language. This additional data type is called result tree fragment. A variable may be bound to a result tree fragment instead of one of the four basic XPath data-types (string, number, boolean, node-set). A result tree fragment represents a fragment of the result tree. A result tree fragment is treated equivalently to a node-set that contains just a single root node. However, the operations permitted on a result tree fragment are a subset of those permitted on a node-set. An operation is permitted on a result tree fragment only if that operation would be permitted on a string (the operation on the string may involve first converting the string to a number or boolean). In particular, it is not permitted to use the /, //, and [] operators on result tree fragments.
so using apply-templates on a variable doesn't work. this is actually where i got stuck before. i just was not sure because i could not verify that everything else was correct.
i wonder if it is possible to load the menu from a second document: https://www.w3.org/TR/xslt-10/#document
edit: it is!
now i just need to finetune this because somehow the $current param fails now.

Ah, I could've sworn that it worked in some version of the page that I tried as I iterated on things, but it could be that the browser just froze on my previously working page and I fooled myself.
Adding xmlns:exsl="http://exslt.org/common" to your xsl:stylesheet and doing select="exsl:node-set($nav-menu-items)/item" seems to work on both Chrome and Librewolf.
tried that, getting an empty match.
here is the actual stylesheet i am using:
documents look like this: if i use the document() function, with nav-menu.xml looking like this: then i get the menu items, but the test <xsl:when test="@href=$current"> fails

It looks like it's related to your setting the default namespace xmlns="http://www.w3.org/1999/xhtml". You could either add a xmlns:example="http://example.org/templates" and then replace `item` with `example:item` everywhere, or you can override the default namespace within your variable's scope:
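E.g. (sketch):

    <xsl:variable name="nav-menu-items">
      <!-- xmlns="" puts these literal elements back in no namespace,
           so exsl:node-set($nav-menu-items)/item can see them -->
      <item xmlns="" href="index.xhtml">Home</item>
      <item xmlns="" href="about.xhtml">About</item>
    </xsl:variable>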
I think you also don't really need to set the default namespace to xhtml, so I believe you could remove that and not worry about namespaces at all (except for xsl and exsl).

The test is failing because it's `/about.xhtml` in the template but `about` outside. You'd either need to add a name attribute to item to compare on or make it match the href.
That should make your thing work if I haven't fooled myself again. :)
> I think you also don't really need to set the default namespace to xhtml
you are right. i removed it, and it works. typical "copy from stackoverflow" error. these namespaces are a mystery and not intuitive at all. i suppose most people don't notice that because it only applies to xml data within the stylesheet. most people won't have that so they won't notice an issue. the less the better.
for the other error, my mistake, duh! in my original example in https://news.ycombinator.com/item?id=44961352 i am comparing $current/@name to a hardcoded value, so if i want to keep that comparison i have to add that value to the nav-menu data. or use a value that's already in there.
i went with adding a name="about" attribute to the nav-menu because it keeps the documents cleaner: <document name="about"> just looks better, and it also allows me to treat it like an ID that doesn't have to match the URL which allows renaming/moving documents around without having to change the content. (they might go from about.xhtml to about/index.xhtml for example)
i am also probably going to use the document() function instead of exsl:node-set() because having the menu data in a separate file in this case is also easier to manage. it's good to know about that option though. being able to iterate over some local data is a really useful feature. i'll keep that around as an example.
the final piece of the puzzle was:
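(roughly, a position() test like this:)

    <xsl:if test="position() != last()">
      <xsl:text> | </xsl:text>
    </xsl:if>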
to put a separator between the items, but not after.

that sorted, now it all works. thank you again.
btw, it's funny that we are turning hackernews into an xsl support forum. i guess i should write all that up into a post some day.
Nice. Fwiw I believe you can also use css for the separators if you've put them in a list:
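(untested sketch:)

    /* draw a separator before every item except the first */
    li + li::before {
      content: " | ";
    }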
If xslt survives maybe I should make a forum and/or wiki. Using xslt, of course.

Yeah, unfortunately the one criticism of XSLT that you can't really deny is that there's hardly any information out there about how to use it, so beyond the tiny amount of documentation on MDN, you kind of have to figure out your own patterns. It feels a little unfair, though, that it basically comes down to "this doesn't have a mega-corporation marketing it". That, and the devtools for it are utterly broken/left in the early 00s for similar reasons. You could imagine something like the Godbolt compiler explorer existing for template expansion, showing the input document on the left and the output on the right, with color highlighting for how things expanded, but instead we get devtools that barely work at all.
You're right on the href; maybe there isn't a slick/more "HTML beginner friendly" way to get rid of the <xsl:choose> stuff even in 3.0. I have no experience with 3.0 though, since it doesn't work in browsers.
I get a little fired up about the XSLT stuff because I remember being introduced to HTML in an intersession school class when I was like... 6? XSLT wasn't around at that time, but I think I learned about it when I was ~12-13, and it made sense to me then. The design of all the old stuff was approachable for normal humans and made it very easy to bite off a little more at a time to make your own personal web pages. "Use React and JSON APIs" or "use SSR" seems like just giving up on the idea that non-programmers should be able to participate in the web too. Should we do away with top-level HTML/CSS while we're at it and just use DOM APIs?
There were lots of things in the XML ecosystem I didn't understand at the time (what in the world was the point of XSDs and what was a schema and how do you use them to make web pages? I later came to appreciate those as well after having to work as a programmer with APIs that didn't have schema files), but the template expansion thing to make new tags was easy to latch onto.
> devtools for it are utterly broken
right, that's a big issue too. when the xsl breaks (in this case when i use <xsl:apply-templates select="$nav-menu-items/item">) i get an empty page and nothing telling me what could be wrong. if i remove the $ the page works, and the apply-templates directive is just left out.
How do you format a raw XML file in the browser without XSLT?
instead of including a reference to the XSLT stylesheet, apparently you can also include javascript: https://stackoverflow.com/a/16426395
That's only if the original document is an XHTML document that will have scripts loaded. Other XML documents, such as RSS feeds, will not have any support for JS, short of something silly like putting it in an iframe.
i didn't test it, but the stackoverflow answers suggested otherwise. are they wrong?
But can it transform / format the XML?
why should it not? once loaded it should find the XML in the DOM and transform that any way you like.
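roughly like this (untested sketch; assumes the browser exposes XSLTProcessor and that a style.xsl sits next to the document):

  <?xml version="1.0" encoding="UTF-8"?>
  <data>
    <!-- an XHTML-namespaced script element executes even in a plain XML document -->
    <script xmlns="http://www.w3.org/1999/xhtml"><![CDATA[
      fetch('style.xsl')
        .then(r => r.text())
        .then(src => {
          const xsl = new DOMParser().parseFromString(src, 'application/xml');
          const proc = new XSLTProcessor();
          proc.importStylesheet(xsl);
          const out = proc.transformToDocument(document);
          // swap the rendered result in for the raw XML root
          document.documentElement.replaceWith(document.adoptNode(out.documentElement));
        });
    ]]></script>
    <item>hello</item>
  </data>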
It solves the problem of not requiring a full turing machine with a giant API that has a history of actual exploits and not just FUD behind it.
i believe XSLT is Turing complete, and regarding exploits, you'll want to read this: https://news.ycombinator.com/item?id=44910050
it turns out that because XSLT was largely ignored, it is full of security issues, some of which have been in there for decades.
so the reason XSLT doesn't have a history of exploits is because nobody used it.
>while there was enormous energy to improve JavaScript
What was the point of it though? People transpile from other languages anyway and pull megabytes of npm dependencies.
This question is analogous to asking what the point of better CPUs is when people use compilers/assemblers instead of writing binaries in a hex editor.
Community feedback is usually very ad hoc. Platform PMs will work with major sites, framework maintainers, and sometimes do discussions and polls on social sites. IOW, they try to go where the community that uses the features are, rather than stay on GitHub in the spec issues.
Although in this case, it seems more like they are trying to go where the community that uses the feature isn't.
There isn't one. It's Google's web now. You should be thankful that you are still allowed to use it.
I think this post is useful where the thread author proposed some solutions to the people affected: https://github.com/whatwg/html/issues/11523#issuecomment-318...
The main thing that seems unaddressed is the UX if a user opens a direct link to an XML file and will now just see tag soup instead of the intended rendering.
I think this could be addressed by introducing a <?human-readable ...some url...?> processing instruction that browsers would interpret like a meta tag redirect. Then sites that are interested could put that line at the top of their XML files and redirect to an alternative representation in HTML or even to a server-side or WASM-powered XSLT processor for the file.
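For example (entirely hypothetical syntax; no browser implements this today):

  <?xml version="1.0"?>
  <?human-readable href="https://example.com/feed.html"?>
  <rss version="2.0">
    ...
  </rss>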
Sort of like an inverse of the <link rel="alternate" ...> solution that the post mentioned.
The only thing this doesn't fix is sites that are abandoned and won't update, or that are part of embedded devices and can't update.
> I think this could be addressed by introducing a <?human-readable ...some url...?> processing instruction that browsers would interpret like a meta tag redirect. Then sites that are interested could put that line at the top of their XML files and redirect to an alternative representation in HTML or even to a server-side or WASM-powered XSLT processor for the file.
HTTP has already had this since the 90s. Clients send the Accept HTTP header indicating which format they want and servers can respond with alternative representations. You can already respond with HTML for browsers and XML for other clients today. You don’t need the browser to know how to do the transformation.
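A rough sketch of the exchange (paths and q-values are illustrative):

  GET /feed HTTP/1.1
  Accept: text/html,application/xhtml+xml;q=0.9,application/xml;q=0.8

  HTTP/1.1 200 OK
  Content-Type: text/html
  Vary: Accept

A feed reader asking with something like Accept: application/rss+xml would get the raw XML instead.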
This is breaking the web though.
If they are so worried, then have the xslt support compiled to wasm and sandboxed.
This is not breaking the web, stop being so needlessly hyperbolic. XSLT use is absolutely tiny. If you removed it, >99.9% of the web wouldn’t even notice.
If we removed everyone named Jim Dabell from the world, the other 99% wouldn't even notice. They're absolutely tiny. Perhaps we should try doing that.
It certainly wouldn’t break the world. You are being needlessly hyperbolic.
You're equivocating; "Don't break the Web" means what it has always meant, but you're not-so-subtly suggesting it means something else. Stop being a waste of time.
Apart from that doesn’t really work for people who are statically hosting their RSS feeds etc.
You can use content negotiation with static websites too. Apache has mod_negotiation, for example.
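A minimal sketch, assuming mod_negotiation is enabled and feed.html/feed.xml sit side by side:

  # .htaccess: requests for /feed get feed.html or feed.xml,
  # whichever best matches the client's Accept header
  Options +MultiViews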
Assuming you have access to server configuration. XML/XSLT works anywhere you can host a static page.
Most people are hosting static sites on GH pages, Vercel, Netlify, Cloudflare pages etc
I actually found that particular response to be quite disappointing. It should give pause to those advocating removal of XSLT that these three totally disparate use cases could already be gracefully handled by a single technology which is:
* side effect free (a pure data to data transformation)
* stable, from a spec perspective, for decades
* completely client-side
Isn't this basically an A+ report card for any attempt at making a powerful general tool? The fact that the suggested solution in the absence of XSLT is to toil away at implementing application-specific solutions forever really feels like working toward the wrong direction.
Purely out of curiosity, what are some websites that actually make use of XSLT?
Skechers used to :)
https://thedailywtf.com/articles/Sketchy-Skecherscom
Also world of warcraft used to.
Can’t think of recent examples though.
Many sitemaps and RSS feeds use XSL to seamlessly present human readable content.
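The mechanism is a single processing instruction at the top of the feed, along these lines (the href is a placeholder):

  <?xml version="1.0" encoding="UTF-8"?>
  <?xml-stylesheet type="text/xsl" href="/feed.xsl"?>
  <rss version="2.0">
    ...
  </rss>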
You can include a "link" HTTP header similar to a link tag. See https://developer.mozilla.org/en-US/docs/Web/HTTP/Reference/...
This would work without special syntax in the XML file.
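For example (a sketch; whether a given browser would actually honor it when navigating to raw XML is the open question):

  Link: </feed.html>; rel="alternate"; type="text/html"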
Any solution that requires any change to the websites affected, no matter how small, is not a solution at all. DO. NOT. BREAK. THE. WEB.
Ah, how easy it is to bloviate when you're not actually the one having to maintain the web, huh?
Google doesn't have to maintain the web, they chose to. They also chose to make the web infinitely more complicated so that others are less likely to "compete" for that responsibility. You don't get to insert yourself into that position and then only reap the benefits without putting in the required effort.
> [T]he maintainer of libxslt has stepped down: https://gitlab.gnome.org/GNOME/libxml2/-/issues/913
... Largely because of lack of help from major users such as browsers.
Disclaimer: I work on Chrome and I have contributed a (very) small number of fixes to libxml2/libxslt for some of the recent security bugs.
Speaking from personal experience, working on libxslt is... not easy, for many reasons beyond the complexity of XSLT itself. For instance:
- libxslt is linked against by all sorts of random apps and changes to libxslt (and libxml2) must not break ABI compatibility. This often constrains the shape of possible patches, and makes it that much harder to write systemic fixes.
- libxslt reaches into libxml and reuses fields in creative ways, e.g. libxml2's `xmlDoc` has a `compression` field that is ostensibly for storing the zlib compression level [1], but libxslt has co-opted it for a completely different purpose [2].
- There's a lot of missing institutional knowledge and no clear place to go for answers, e.g. what does a compile-time flag that guards "refactored parts of libxslt" [3] do exactly?
[1] https://gitlab.gnome.org/GNOME/libxml2/-/blob/ca10c7d7b513f3...
[2] https://gitlab.gnome.org/GNOME/libxslt/-/blob/841a1805a9a9aa...
[3] https://gitlab.gnome.org/GNOME/libxslt/-/blob/841a1805a9a9aa...
Sounds like libxslt needs more than just a small number of fixes, and it sounds like Google could be paying someone, like you, to help provide the necessary guidance and feedback to increase the usability and capabilities of the library and evolve it for the better.
Instead Google and others just use it, and expect that any issues that come up to be immediately fixed by the one or two open source maintainers that happen to work on it in their spare time. The power imbalance must not be lost on you here...
If you wanted to dive into what [3] does, you could do so; you could then document it, refactor it so that it is more obvious, or remove the compile-time flag entirely. There is institutional knowledge everywhere...
or, the downstream users who use it and benefit directly from it could step up, but websites and their users are extremely good at expecting things to just magically keep working especially if they don't pay for it. it was free, so it should be free forever, and someone set it up many moons ago, so it should keep working for many more magically!
// of course we know that, as end-users became the product, Big Tech [sic?] started making sure that users remain dumb.
Website operators are fine with how libxslt works now. It's browser vendors that want change.
You mean they are fine with expecting it to be maintained by browser vendors indefinitely for free.
Browser vendors aren't maintaining the web for free; they are for-profit corporations that have chosen to take on that role for the benefits it provides them. It's only fair that we demand they also respect the responsibilities that come with it. And we can also point out the hollowness of complaints about the hardship of maintaining the web's legacy when they keep making it harder for independent browser developers by adding tons of new complexity.
Sure, of course, but unless funding is coming from users the economics won't change, because:
The vendors cite one aspect of said responsibility (security!) to get rid of another aspect (costly maintenance of a low-revenue feature).
The web is evolving, there's a ton of things that developers (and website product people, and end-users) want. Of course it comes with a lot of "frivolous" innovation, but that's part of finding the right abstractions/APIs.
(And just to make it clear, I think it's terrible for the web and vendors that ~100% of the funding comes from a shady oligopoly that makes money by selling users - but IMHO this doesn't invalidate the aforementioned resource allocation trade off.)
> libxslt is linked against by all sorts of random apps and changes to libxslt (and libxml2) must not break ABI compatibility. This often constrains the shape of possible patches, and makes it that much harder to write systemic fixes.
I’m having trouble expressing this in a way that won’t likely sound harsher than I really want, but, uh, yes? That’s the fundamental difference between maintaining a part of the commons that anybody can benefit from and a subdirectory in a monorepo. The bazaar incurs coordination costs, and not being able to go and fix all the callers is one of them.
(As best as I can see, Chrome’s approach is largely to make everything a part of the monorepo, so maintaining a part of the commons may not be high on the list of priorities.)
This not to defend any particular ABI choice. Too often ABI is left to luck and essentially just happens instead of being deliberately designed, and too often in those cases we get unlucky. (I’m tempted to recite an old quote[1] about file formats, which are only a bit more sticky than public ABI, because of how well it communicates the amount of seriousness the subject ought to evoke: “Do you, Programmer, take this Object to be part of the persistent state of your application, to have and to hold, through maintenance and iterations, for past and future versions, as long as the application shall live?”)
I’m not even deliberately singling out what seems to me like the weakest of the examples in your list. It’s just that ABI, to me, is such a fundamental part of lib-anything that raising it as an objection against fixing libxslt or libxml2 specifically feels utterly bizarre.
[1] http://erights.org/data/serial/jhu-paper/upgrade.html
It's one thing if the library was proactively written with ABI compatibility in mind. It's another thing entirely if the library happens to expose all its implementation details in the headers, making it that much harder to change things.
When i first encountered the early GNOME 1 software back in the very late 1990s, and DV (the libxml author) was active, i was very surprised when i asked for the public API for a library and was told: look at the header files and the source.
They simply didn’t seem to have a concept of data hiding and encapsulation, or worse, felt it led to evil nasty proprietary hidden code and were better than that.
They were all really nice people, mind you—i met quite a few of them, still know some—and the GNOME project has grown up a lot, but i think that’s where libxml was coming from. Daniel didn’t really expect it to be quite so widely used, though, i’m sure.
I’ve actually considered stepping up to maintain libxslt, but i don’t know enough about building on Windows and don’t have access to non-Linux systems really. Remote access will only go so far on Windows i think, although it’d be OK on Mac.
It might be better to move to one of the Rust XML stacks that are under active development (one more active than the other).
No, it's the same in both cases. ABI stability is what every library should provide no matter how ugly the ABI is.
Former Mozilla and Google (Chrome team specifically) dev here. The way I see what you're saying is: Representatives from Chrome/Blink, Safari/Webkit, and Firefox/Gecko are all supportive of removing XSLT from the web platform, regardless of whether it's still being used. It's okay because someone from Mozilla brought it up.
Out of those three projects, two are notoriously under-resourced, and one is notorious for constantly ramming through new features at a pace the other two projects can't or won't keep up with.
Why wouldn't the overworked/underresourced Safari and Firefox people want an excuse to have less work to do?
This appeal to authority doesn't hold water for me because the important question is not 'do people with specific priorities think this is a good idea' but instead 'will this idea negatively impact the web platform and its billions of users'. Out of those billions of users it's quite possible a sizable number of them rely on XSLT, and in my reading around this issue I haven't seen concrete data supporting that nobody uses XSLT. If nobody really used it there wouldn't be a need for that polyfill.
Fundamentally the question that should be asked here is: Billions of people use the web every day, which means they're relying on technologies like HTML, CSS, XML, XSLT, etc. Are we okay with breaking something that 0.1% of users rely on? If we are, okay, but who's going to tell that 0.1% of a billion people that they don't matter?
The argument I've seen made is that Google doesn't have the resources (somehow) to maintain XSLT support. One of the googlers argued that new emerging web APIs are more popular, and thus more deserving of resources. So what we've created is a zero-sum game where any new feature added to the platform requires the removal of an existing feature. Where does that game end? Will we eventually remove ARIA and/or screen reader support because it's not used by enough people?
I think all three browser vendors have a duty to their users to support them to the best of their ability, and Google has the financial and human resources to support users of XSLT and is choosing not to.
Another way to look at this is:
Billions of people use the web every day. Should the 99.99% of them be vulnerable to XSLT security bugs for the other 0.01%?
That same argument applies to numerous web technologies, though.
Applied to each individually it seems to make sense. However, the aggregate effect is to kill off a substantial portion of the web.
In fact, it's an argument to never add a new web technology: Should 100% of web users be made vulnerable to bugs in a new technology that 0% of the people are currently using?
Plus it's a false dichotomy. They could instead address XSLT security... e.g., as various people have suggested, by building in the XSLT polyfill they are suggesting all the XSLT pages start using as an alternative.
depends entirely on which technologies are actively addressing current and future vulnerabilities.
The vulnerabilities associated with native client-side XSLT are not in the language itself (XSLT 1.0) but instead are caused by bugs in the browser implementations.
P.S. The XSLT language is actively maintained and is used in many applications and contexts outside of the browser.
If this is the reason to remove and or not add something to the web, then we should take a good hard look at things like WebSerial/WebBluetooth/WebGPU/Canvas/WebMIDI and other stuff that has been added that is used by a very small percentage of people yet all could contain various security bugs...
If the goal is to reduce security bugs, then we should stop introducing niche features that only make sense when you are trying to have the browser replace the whole OS.
whatever you do with xslt you can do in a saner way, but for whatever we need serial/bluetooth/webgpu/midi for, there is no other way; and canvas is massively used.
I'd love to see more powerful HTML templating that'd be able to handle arbitrary XML or JSON inputs, but until we get that, we'll have to make do with XSLT.
For now, there's no alternative that allows serving an XML file with the raw data from e.g. an embedded microcontroller in a way that renders a full website in the browser if desired.
Even more so if you want to support people downloading the data and viewing it from a local file.
If you're OK with the startup cost of 2-3 more files for the viewer bootstrap, you could just fetch the XML data from the microcontroller using JS. I assume the xsl stylesheet is already a separate file.
I don't think anyone is attached to the technology of xslt itself, but to the UX it provides.
Your microcontroller only serves the actual xml data, the xslt is served from a different server somewhere else (e.g., the manufacturer's website). You can download the .xml, double-click it, and it'll get the xslt treatment just the same.
In your example, either the microcontroller would have to serve the entire UI to parse and present the data, or you'd have to navigate to the manufacturers website, input the URL of your microcontroller, and it'd have to do a cors fetch to process the data.
One option I'd suggest: instead of an <?xml-stylesheet …?> processing instruction, we'd use a service worker script to process the data. Service workers are already predestined to do this kind of resource processing and interception, and it'd provide the same UX. The service worker would not be associated with any specific origin, but it would still receive the regular lifecycle of events, including a fetch event for every load of an XML document pointing at this specific service worker script.
Using https://developer.mozilla.org/en-US/docs/Web/API/FetchEvent/... it could respond to the XML being loaded with a transformed response, allowing it to process the XML similar to an XSLT.
You could even have a polyfill service worker that loads an XSLT and applies it to the XML.
Of course there is a better way than webserial/bluetooth/webgpu/webmidi: Write actual applications instead of eroding the meaning and user expectations of a web browser. The expectation should not be that the browser can access your hardware directly. That is a much more significant risk for browsers than XSLT could ever be.
Solutions have been proposed in that thread, including adding the XSLT polyfill to the browser (which would run it in the Javascript VM/sandbox).
If the usage/risk of XSLT is enough to remove it, you'd have to remove webusb, webbluetooth, webmidi, webxr, and countless more
Yes, please.
Isn't this something that could be implemented using javascript?
I don't think anyone is arguing that XSLT has to be fast.
You could probably compile libxslt to wasm, run it when loading xml with xslt, and be done.
Does XSLT affect the DOM after processing? Isn't it just a dumb preprocessing step, where the rendered XHTML is what becomes the DOM?
It could be. The meaningful argument is over whether the javascript polyfill should be built into the browser (in which case, browser support remains the same as it ever was, they just swap out a fast but insecure implementation for a slow but secure one), or whether site operators, principally podcast hosts, should be required to integrate it into their sites and serve it.
The first strategy is obviously correct, but Google wants strategy 2.
As discussed in the GitHub thread, strategy two is fundamentally flawed because there’s no other way to make an XML document human readable in today’s browsers. (CSS is close but lacking some capabilities)
So site operators who rely on this feature today are not merely asked to load a polyfill but to fundamentally change the structure of their website - without necessarily getting to the same result in the end.
So the Safari developers are overworked/under-resourced, but Google somehow should have infinite resources to maintain things forever? Apple is a much bigger company than Google these days, so why shouldn't they also have these infinite resources? Oh, right, its because fundamentally they don't value their web browser as much as they should. But you give them a pass.
One is a browser. The other is an ad delivery platform which requires a more strategic active development posture.
The funny thing is that apple has a huge ad business so I don't know which browser you mean.
> but Google somehow should have infinite resources to maintain things forever?
Google adds 1000+ new APIs to the web platform a year. They are expected to be supported nearly forever. They have no qualms adding those.
Many such cases. Remember when the Chrome team seriously thought they could just disable JavaScript alert() overnight [1][2] and not break decades of internet compatibility? It still makes me smile how quietly this was swept under the rug once it crashed and burned, just like how the countless "off-topic" and "too emotional" comments on Github said it would.
Glad to see the disdain for the actual users of their software remains.
[1] https://github.com/whatwg/html/issues/2894 [2] https://www.theregister.com/2021/08/05/google_chrome_iframe/
(FWIW I agree alert and XSLT are terrible, but that ship sailed a long time ago.)
Bring back VRML!
Seriously though, if I were forced to maintain every tiny legacy feature in a 20 year old app... I'd also become a "former" dev :)
Even in its heyday, XSLT seemed like an afterthought. Probably there are a handful of legacy corporate users hanging on to it for dear life. But if infinitely more popular techs (like Flash or FTP or non HTTPS sites) can be deprecated without much fuss... I don't think XSLT has much of a leg to stand on...
> But if infinitely more popular techs (like Flash or FTP or non HTTPS sites) can be deprecated without much fuss... I don't think XSLT has much of a leg to stand on...
Flash was not part of the web platform. It was a plugin, a plugin that was, over time, abandoned by its maker.
FTP was not part of the web platform. It was a separate protocol that some browsers just happened to include a handler for. If you have an FTP client, you can still open FTP links just fine.
Non-HTTPS sites are being discouraged, but still work fine, and can reasonably be expected to continue to work indefinitely, though they are likely to be discouraged a bit harder over time.
XSLT is part of the web platform. And removing it breaks various things.
I don't think that distinction makes much of a difference for the users and devs affected...
Flash was the best part of the web, though.
Not if you were on a non-mainstream platform. Like some Linux, or oh my gawd NetBSD!1!!
I couldn't be more happy about its demise.
XSLT was awesome back in the day. You could get a block of XML data from the server, and with a bit of very simple scripting, slice it, filter it, sort it, present summary or detail views, generate tables or forms, all without a server round trip. This was back in IE6 days, or even IE5 with an add-on.
We built stuff with it that amazed users, because they were so used to the "full page reload" for every change.
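A flavor of what that looked like (illustrative only; the element names are made up):

  <!-- show only open orders, newest first; re-sorting meant just
       re-running the transform client-side, no server round trip -->
  <xsl:template match="/orders">
    <table>
      <xsl:for-each select="order[@status = 'open']">
        <xsl:sort select="@date" order="descending"/>
        <tr>
          <td><xsl:value-of select="@id"/></td>
          <td><xsl:value-of select="@date"/></td>
        </tr>
      </xsl:for-each>
    </table>
  </xsl:template>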
> Probably there are a handful of legacy corporate users hanging on to it for dear life.
Like more or less everyone that hosts podcasts. But the current trend is for podcast feeds to go away, and be subsumed into Spotify and YouTube.
Do people consume RSS feeds directly via XSLT? Not through apps and such that subscribe to the feed?
This came up in some of the comments: https://github.com/whatwg/html/issues/11523#issuecomment-315... if you click the links instead of copy/pasting into your reader you get a page full of raw XML. It's not harmful or anything but it's not a great look. You can't really expect your users to just never click on your links, that's usually what links are for.
> Seriously though, if I were forced to maintain every tiny legacy feature in a 20 year old app... I'd also become a "former" dev :)
And those that would replace you might care more for the web rather than the next performance review.
+1. I worked on an internal corporate eCommerce site in 2005 built entirely on DOM + XSLT to create the final HTML. It was an atrocious pain in the neck to maintain (despite being server side, so the browser never had to deal with the XSLT). Unless you still manipulate XML and need to transform it into various other formats through XSLT/XSL-FO, I don't see why anyone would bother with it. It always cracks me up when people « demand » support for features hardly ever used, for which they won't spend a dime or a minute to help.
When I see "reps from every browser agree" my bullshit alarm immediately goes off. Does it include unanimous support from browser projects that are either:
1. not trillion dollar tech companies
or
2. not 99% funded from a trillion dollar tech company.
I have long suspected that Google gives so much money to Mozilla both for the default search option, but also for massive indirect control to deliberately cripple Mozilla in insidious ways to massively reduce Firefox's marketshare. And I have long predicted that Google is going to make the rate of change needed in web standards so high that orgs like Mozilla can't keep up and then implode/become unusable.
The reckless, infinite scope of web browsers https://drewdevault.com/2020/03/18/Reckless-limitless-scope....
It’s worth noting that since that article was written, the Ladybird browser has made a lot of progress with their new browser engine.
https://ladybird.org
Well, every browser engine that is part of WHATWG. That's how working groups... work. The current crop of "not Chrome/Firefox/Webkit" aren't typically building their own browser engines though. They're re-skinning Chromium/Gecko/Webkit.
This makes the job of smaller engines like Servo and Ladybird a lot easier.
> Does it include unanimous support from browser projects
They could continue supporting XSLT if they wanted.
It's not a huge conspiracy, but it is worthwhile to consider what the incentives are for people from each browser vendor. In practice all the vendors probably have big backlogs of work they are struggling to keep up with. The backlogs are accumulating in part because of the breakneck pace at which new APIs and features are added to the web platform, and in part because of the unending torrent of new security vulnerabilities being discovered in existing parts of the platform. Anything that reduces the backlog is thus really appealing, and money doesn't have to change hands.
Arguably, we could lighten the load on all three teams (especially the under-resourced Firefox and Safari teams) by slowing the pace of new APIs and platform features. This would also ease development of browsers by new teams, like Servo or Ladybird. But this seems to be an unpopular stance because people really (for good reason) want the web platform to have every pet feature they're an advocate for. Most people don't have the perspective necessary to see why a slower pace may be necessary.
>I have long suspected that Google gives so much money to Mozilla both for the default search option, but also for massive indirect control to deliberately cripple Mozilla in insidious ways to massively reduce Firefox's marketshare.
This has never ever made sense because Mozilla is not at all afraid to piss in Google's cheerios at the standards meetings. How many different variations of Flock and similar adtech oriented features did they shoot down? It's gotta be at least 3. Not to mention the anti-fingerprinting tech that's available in Firefox (not by default because it breaks several websites) and opposition to several Google-proposed APIs on grounds of fingerprinting. And keeping Manifest V2 around indefinitely for the adblockers.
People just want a conspiracy, even when no observed evidence actually supports it.
>And I have long predicted that Google is going to make the rate of change needed in web standards so high that orgs like Mozilla can't keep up and then implode/become unusable.
That's basically true whether incidentally or on purpose.
Controlled opposition is absolutely a thing, and to think that people at trillion dollar companies wouldn't do this is naive. I'm not claiming for a fact that mozilla is controlled opposition, i'm just saying it's very feasible that it could be, and i look for signs of it.
You give examples of things they disagree on, and i wouldn't refute that. However i would say that google is going to pick and choose their battles, because ultimately things they appear to "lose on" sort of don't matter. fingerprinting is a great example - yes, firefox provides it, but it's still largely pretty useless, and its impact is even more meaningless because so few people use it. if you have javascript on and arent using a VPN, chances are your anti-fingerprinting isn't actually doing much other than annoying you and breaking sites.
the only real thing to be used for near-complete-anonymity is Tor, but only when it's also used in the right way, and when JavaScript is also turned off. And even then there are ways it could and probably has failed.
> Representatives from Chrome/Blink, Safari/Webkit, and Firefox/Gecko are all supportive of removing XSLT
Did anybody bother checking with Microsoft? XML/XSLT is very enterprisey and this will likely break a lot of intranet (or $$$ commercial) applications.
Secondly, why is Firefox/Gecko given full weight for their vote when their marketshare is dwindling into irrelevancy? It's the equivalent of the crazy cat hoarder who wormed her way onto the HOA board speaking for everyone else. No.
There are countries like Germany where Firefox still has around 10% market share [0], or closer to 20% on the desktop, only second behind Chrome [1]. Not exactly irrelevant.
[0] https://gs.statcounter.com/browser-market-share/all/germany
[1] https://gs.statcounter.com/browser-market-share/desktop/germ...
It has long seemed like Firefox is likely doing Google's bidding? That could be a reason why they're given a full vote?
/abject-speculation
> Did anybody bother checking with Microsoft?
> Secondly, why is Firefox/Gecko given full weight for their vote when their marketshare is dwindling into irrelevancy?
The juxtaposition of these two statements is very funny.
Firefox actually develops a browser, Microsoft doesn't. That's why Firefox gets a say and Microsoft doesn't. Microsoft jumped off the browser game years ago.
No, changing the search engine from Google to Bing in chromium doesn't count.
Ultimately, Microsoft isn't implementing jack shit around XSLT because they aren't implementing ANY web standards.
You make it sound like those two thoughts are incompatible in juxtaposition, but they are in fact perfectly consistent, even if you were correct that Microsoft isn't building anything: the premise is that users matter more than elbow grease. The reason you'd want to ask Microsoft is the same reason you might not bother consulting Firefox: Microsoft has actual users to represent, and Firefox does not.
This is not true. Microsoft is participating in standards and implementing them in Blink.
"Secondly, why is Firefox/Gecko given full weight for their vote when their marketshare is dwindling into irrelevancy?"
There was not really a vote in the first place, and FF is still dependent on Google. Otherwise FF (users) represents a vocal and somewhat influential minority, capable of creating shitstorms if the pain level is high enough.
Personally, I always thought XSLT is somewhat weird, so I never used it. Good choice in hindsight.
Maybe because Edge is just a wrapper around Blink?
So Microsoft is cucked by Google, and Mozilla is a puppet regime of Google at this point.
Seems like a rigged game to me.
Yes it's a wrapper but Microsoft represents a completely different market with individual needs/wants.
If it wasn't for Apple (who doesn't care about enterprise) butting in, the browser consortium would be reminiscent of the old Soviet Union in terms of voting.
> Secondly, why is Firefox/Gecko given full weight for their vote when their marketshare is dwindling into irrelevancy?
Ironic, considering the market share of XSLT.
>who's going to tell that 0.1% of a billion people that they don't matter?
This is also not a fair framing. There are lots of good reasons to deprecate a technology, and it doesn't mean the users don't matter. As always, technology requires tradeoffs (as does the "common good", usually.)
It sounds like Mozilla has problems despite the quite lucrative "notoriously under-resourced" $400M - $500M Google spends on FF every year
Is there a spending on junk projects issue with Firefox?
https://galaxy.ai/youtube-summarizer/is-mozilla-wasting-mone...
And Google even has a doc literally saying that you shouldn't break the web even if a small number of sites use a feature: https://news.ycombinator.com/item?id=44956267
> Why wouldn't the overworked/underresourced Safari and Firefox people want an excuse to have less work to do?
Because otherwise everybody has to repeat same work again and again, programming how - instead of focusing on what, declarative way.
Then data is not free, but caged by processing so it can't exist without it.
I just want data or information - not processing, no strings attached.
I don't see any need to run any extra code over any information - except to keep control and to attach other code, trackers etc. But I'm not Google; I have no need to push anything. (A faster JS engine, instead of empowering users, somehow made a browser better? No matter how fast, it can't - not for what I needed. Or was it instead of something they 'forgot' and wish they could erase?)
> 0.1% of a billion people
Probably more like 0.0001% these days. I doubt 0.1% of websites ever used it.
0.02% of public Web pages, apparently, have the XSLT processing instruction in them, and a few more invoke XSLT through JavaScript (no-one really knows how many right now).
It’s likely more heavily used inside corporate and governmental firewalls, but that’s much harder to measure.
By your argument, once anything makes it in, then it can't be removed. Billions of people are going to use the web every day and it won't stop. Even the most obscure feature will end up being used by 0.1% of users. Can you name a feature that's supported by all browsers that's not being used by anyone?
Yes. That is exactly how web standards work historically. If something will break 0.1% of the web it isn't done unless there are really really strong reasons to do it anyway. I personally watched lots of things get bounced due to their impact on a very small % of all websites.
This is part of why web standards processes need to be very conservative about what's added to the web, and part of why a small vocal contingent of web people are angry that Google keeps adding all sorts of weird stuff to the platform. Useful weird stuff, but regardless.
“That is exactly how web standards work…”
Says who? You keep mentioning this 0.1% threshold yet…
1. I can’t find any reference to that do you have examples / citations?
2. On the contrary here’s a paper that proposes a 3x higher heuristic: https://arianamirian.com/docs/icse2019_deprecation.pdf
3. It seems there are plenty of examples of features being removed above that threshold NPAPI/SPDY/WebSQL/etc.
4. Resources are finite. It’s not a simple matter of who would be impacted. It’s also opportunity cost and people who could be helped as resources are applied to other efforts.
E.g. Google said in their document https://docs.google.com/document/d/1RC-pBBvsazYfCNNUSkPqAVpS...
--- start quote ---
As a general rule of thumb, 0.1% of PageVisits (1 in 1000) is large, while 0.001% is considered small but non-trivial. Anything below about 0.00001% (1 in 10 million) is generally considered trivial. There are around 771 billion web pages viewed in Chrome every month (not counting other Chromium-based browsers). So seriously breaking even 0.0001% still results in someone being frustrated every 3 seconds, and so not to be taken lightly!
--- end quote ---
Read the full doc. They even give examples where they couldn't remove a feature impacting just 0.0000008% of web views.
Also, according to Chrome's telemetry, very, very few websites are using it in practice. It's not like the proposal is threatening to make some significant portion of the web inaccessible. At least we can see the data underlying the proposal here.
Sadly, I just built a web site with HTMX and am using the client-side-templates extension for client-side XSLT.
>very, very few websites
Doesn't include all the corporate web sites that they are probably blocked from getting such telemetry for. These are the users that are pushing back.
Does that library use the browser's xslt?
I'm curious as to the scope of the problem, if html spec drops xslt, what the solutions would be; I've never really used xslt (once maybe, 20 years ago). In addition to just pre-rendering your webpage server-side, I assume another possible solution is some javascript library that does the transformations, if it needed to be client-side?
Found a js-only library, so someone has done this before: https://www.npmjs.com/package/xslt-processor
Looking at the problem differently: say some change would make Hacker News unusable. The data would support this change and show that it practically affects no one.
First, we are an insignificant portion of the web, and it's okay to admit that.
Second, if HN were built upon outdated Web standards practically nobody else uses, I'm sure YCombinator could address the issue before the deadline (which would probably be at least a year or two out) to meet the needs of its community. Every plant needs nourishment to survive.
It's not OK for the Google & co to chip away at "insignificant" portions of the web until all that's left are big corporate run platforms.
First, you're assuming that those portions of the Web won't evolve in order to survive. Second, you're ascribing a motive to Google that you assume (probably falsely) that they possess.
The people writing, and visiting websites that rely on XSLT are the same users that disable or patch out telemetry.
A LOT of internal corpo websites use XSLT.
1. Chrome telemetry underreports a lot of use cases
2. They have a semi-internal document https://docs.google.com/document/d/1RC-pBBvsazYfCNNUSkPqAVpS... that explicitly states: small usage percentage doesn't mean you can safely remove a feature
--- start quote ---
As a general rule of thumb, 0.1% of PageVisits (1 in 1000) is large, while 0.001% is considered small but non-trivial. Anything below about 0.00001% (1 in 10 million) is generally considered trivial.
There are around 771 billion web pages viewed in Chrome every month (not counting other Chromium-based browsers). So seriously breaking even 0.0001% still results in someone being frustrated every 3 seconds, and so not to be taken lightly!
--- end quote ---
3. Any feature removal on the web has to be given thorough thought and investigation, which we haven't seen. The Library of Congress apparently uses XSLT, and Chrome devs couldn't care less.
> Chrome telemetry underreports a lot of use cases
Sure; in that case, I would suggest to the people with those use cases that they should stop switching off telemetry. Everyone on HN seems to forget telemetry isn't there for shits and giggles, it's there to help improve a product. If you refuse to help improve the product, don't expect a company to improve the product for you, for free.
Hmm, I don't see the LOC listed here among the top sites: https://chromestatus.com/metrics/feature/timeline/popularity... - where are you seeing the Library of Congress as impacted?
This was mentioned in the discussions and is an easy search away. Which means that the googlers, in their arrogance, didn't do any research at all, and that their counter underrepresents data, as explicitly stated in their own document.
https://www.loc.gov/standards/mods/mods-conversions.html
https://www.loc.gov/preservation/digital/formats/fdd/fdd_xml...
And then there's Congress: https://simonwillison.net/2025/Aug/19/xslt/
The library of congress examples appear to be using server side xslt not client side. Thus they are not affected by this deprecation.
Before calling people arrogant you should read your own links.
[The congress example is legit]
Here is an example of a URI using client-side XSLT in the library of congress. They are definitely using this feature.
https://www.loc.gov/standards/mets/profiles/00000016.xml
Before calling people arrogant you should validate your own arrogance.
> [The congress example is legit]
So let me get this straight. The Congress example is legit. Multiple other cases discussed here: https://github.com/whatwg/html/issues/11523 are legit
And yet it's not the Googlers and other browser implementers, who didn't do even a modicum of research, who are arrogant, but me, because I made a potential mistake quickly searching for something on my phone at night?
Do you honestly believe none of these will be addressed before the deadline passes?
Ok thanks, we've dechromed the title above. (Submitted title was "Chrome intends to remove XSLT from the HTML spec".)
The implementations are owned by the implementers. Who owns the actual standard, the implementers or the users?
I think trying to own a web standard is like trying to own a prayer. You can believe all you want, but it's up to the gods to listen or not...
As for any standard, the implementers ultimately own it. Users don't spend resources on implementing standards, so they only get a marginal say. Do you expect to contribute to the 6G standards, or USB-C, too?
Own is not really the right word for an open source project. In practice it is controlled by Apple, Google, Microsoft and Mozilla.
> Even so, give the cross-vendor support for this is seems likely to proceed at some point.
Yup. Just like the removal of confirm/prompt that had vendor support and was immediately rushed, only to be indefinitely postponed, thankfully.
Here's Google's own doc on how a feature should be removed: https://docs.google.com/document/d/1RC-pBBvsazYfCNNUSkPqAVpS...
Notice how the "unilateral support by browser vendors" didn't even involve looking at actual usage of XSLT, where it's used, and whether significant parts of the web would be affected.
Good times.
The responses of some folks on this thread reminds me of this:
https://xkcd.com/1172/
That's more a joke about people coming to rely on any observable behavior of something, no matter how buggy or unintentional.
Here we're talking about killing off XSLT used in the intended, documented, standard way.
So if I'm reading the two threads correctly, essentially Google asked for feedback, essentially all the feedback said "no, please don't", and they said "thanks for the feedback, we're gonna do it anyway!"?
The other suggestions seemed to be "if this is about security, then fund the OSS project, or swap to a newer, safer library, or pull it into the JS sandbox and ensure support is maintained." Which were all mostly ignored.
And "if this is about adoption, then listen to the constant community request to update to the newer XSLT 3.0, which has been out for years and would have much higher adoption due to tons of QoL improvements, including handling JSON."
And the argument presented, which I can't verify (but seems reasonable to me), is that XSLT supports the open web. Google tried to kill it a decade ago; the community pushed back and stopped it. So Google's plan was to refuse to do anything to support it, ignore community requests for simple improvements, try to make it wither, then use that as justification for killing it at a later point.
Forcing this through when almost all feedback is against it seems to support that to me. Especially with XSLT recently gaining a lot of popularity; it seems like they are trying to kill it before they have an open competitor on the web.
https://github.com/whatwg/html/issues/11523
>essentially all the feedback said "no, please don't". And they said "thanks for the feedback, we're gonna do it any way!"?
this is a perfectly reasonable course of action if the feedback is "please don't" but the people saying "please don't" aren't people who are actually using it or who can explain why it's necessary. it's a request for feedback, not just a poll.
> people who are actually using it
I'd presume that most of those people are using it in some capacity, it's just that their numbers are seen as too minor to influence the decision.
> explain why it's necessary
No feature is strictly necessary, so that's a pretty high standard.
> I'd presume that most of those people are using it in some capacity, it's just that their numbers are seen as too minor to influence the decision.
I think the idea of that is reasonable. If I used XSLT on my tiny, low-traffic blog, I think it's reasonable for browser devs to tell me to update my code. Even if 100 people like me said the same thing, that's still a vanishingly small portion of the web, a rounding error, protesting it.
I'd expect the protests to be disproportionate in number and loudness because the billion webmasters who couldn't care less aren't weighing in on it.
Now, I'm not saying this with a strong opinion on this specific proposal. It doesn't affect me either way. It's more about the general principle that a loud number of small webmasters opposing the move doesn't mean it's not a good idea. Like, people loudly argued about removing <marquee> back in the day, but that happened to be a great idea.
True, a small number of vocal opponents does not automatically make something a bad idea. But in these cases of compatibility, especially with something as big as the Web, the vast majority of those affected who do care will be completely silent. There's no hotline to call up the entire world and tell them to update their code.
(And if you did want to tell the entire world to update their code, and have any chance of them following through with it, you'd better make sure there's an immediate replacement ready. Log4Shell would probably still be a huge issue today if it couldn't be fixed in place by swapping out jar files.)
> If I used XSLT on my tiny, low-traffic blog, I think it's reasonable for browser devs to tell me to update my code.
I _do_ use XSLT on my tiny, low-traffic blog, and I _don't_ think that it's reasonable for browser devs to tell me to update my code.
Also, it's real easy to manufacture a situation where adoption of a thing is low when the implementation is incomplete and hasn't had significant updates for decades.
The web has grown a thousand fold over those decades, in spite of no support for XSLT. No browser has failed (or gained market traction) by missing support for (or adding more support for) XSLT. It's an irrelevancy, even if you did like it once.
Lots of content was lost when Flash was removed as well - much, much more than the amount of content that will be lost if XSLT is removed. And yet the web continued.
The web is straight up a weaker, worse, more closed-off experience post-Flash, so I'm not sure that this engenders the kind of response you had envisioned, but now I'm worried about xslt.
... in a diminished state.
You're literally commenting on a thread full of those explanations that were handwaved away.
Part of the reason I stopped was the lack of anything higher than 1.0 in browsers.
The other reason is that SVG took a very long time to get good, and when it did I wanted to use XSL and SVG together.
Now SVG has got good and they are removing it :(
Well, no; the reasonable course of action is to solicit feedback from the right people instead.
yeah! they should only ask for feedback from people who love XSLT, other people's opinion doesn't matter.
Google's own document says numbers don't show the full picture: https://news.ycombinator.com/item?id=44956267
They didn't even do the tiniest bit of research, as people in the discussions clearly showed, and there are high impact sites that would be affected by this including Congress and Library of Congress: https://news.ycombinator.com/item?id=44958929
It would be incredible if we could pull it into the javascript/wasm sandbox and get xslt 3.0 support. The best of both worlds, at the cost of a performance hit on those pages, but not a terrible cost.
There's even a JS implementation of XSLT 3.0 already (SaxonJS).
That's pretty cool; it's too bad the license is a bit confusing about whether bundling with Chrome or Firefox would be permissible.
Not really, because that would add a dependency on Javascript whereas, at the moment, XSLT works without Javascript enabled.
Not necessarily. The idea is that this is browser-internal, so presumably it would still work even if JS from external sources is disabled.
No. The idea is that website authors do the work. The proposal suggests the browsers wholesale remove support and forget about it.
https://github.com/whatwg/html/issues/11523#issuecomment-315...
Oh, I understand that that's the WHATWG proposal. I'm talking about webstrand's idea.
This is clearly the Right Thing. So what do you suppose the chance of it happening is?
It comes with the XML territory that things have versioned schemas and things like namespaces, and can be programmed in XSLT. This typically means that integrations are trivial due to public, reliable contracts.
Unlike your average Angular project. Building on top of minified Typescript is rather unreasonable and integrating with JSON means you have a less than reliable data transfer protocol without schema, so validation is a crude trial and error process.
There's no elegance in raw XML et consortes, but the maturity of this family means there are also very mature tools so in practice you don't have to look at XML or XSD as text, you can just unmarshal it into your programming language of choice (that is, if you choose a suitable one) and look at it as you would some other data structure.
> all the feedback said "no, please don't"
Thread was already locked due to vitriol, insults, and general ranting before I had the chance to comment to say I felt it was a good idea. Also, "this is a good idea" is not really the sort of thing people tend to comment, so it will always be biased towards people who disagree. What "all feedback" said on that thread is basically meaningless – it's not a vote.
The fastest way to be dismissed is to be a dick. People were massive dicks, so they were dismissed. You all made your bed, so now you have to lie in it.
The vast majority of feedback on the GitHub issue was respectful — unless you consider opposing the proposal disrespectful.
There’s not nearly enough comments for “vast majority” to be a useful descriptor, and I saw a significant number of uncivil, rude comments.
Google tells you what they're going to do to the web with a question mark on the end.
It's called uptalking.
Web is for all practical purposes ChromeOS, but then people complain about Apple not playing ball.
> ChromeOS is for all practical purposes, the web.
Fixed that typo for you.
> > ChromeOS is for all practical purposes, the web
I'm very practically using Debian Linux on ChromeOS to develop, test, and debug enterprise software. I even compile and run some native code. It is very much more than just the web.
That is a VM, and maybe eventually it will run on top of WebAssembly, the way things are going.
> That is a VM...
So is WSL on Windows. I wouldn't call Windows "just the web".
There's also nothing stopping me from building and running local desktop GUI software on the VM.
In fact, a VM is better in that I can back up and restore the image easily.
WSL and other VMs are the Year of Desktop Linux finally coming true, nothing to do with Web.
> WSL and other VMs are the Year of Desktop Linux finally coming true, nothing to do with Web.
Just like the Linux VM on ChromeOS.
Which eventually will be a VM on top of WebAssembly, given the hype.
https://webvm.io/
Breaking the fundamental promise of the HTML spec is a big deal.
The discussions don't address that. That surprises me, because these seem to be the people in charge of the spec.
The promise is, "This is HTML. Count on it."
Now it would be just, "This is HTML for now. Don't count on it staying that way, though."
Not saying it should never be done, but it's a big deal.
They are removing XSLT just for being a long-tail technology. The same argument would apply to other long-tail web technologies.
So what they're really proposing is to cut off the web's long tail.
(Just want to note: The list of long-tail web technologies will continue to grow over time... we can expect it to grow roughly in proportion to the rate at which web technologies were added around 20 years in the past. Meaning we can expect an explosion of long-tail web technologies soon enough. We might want to think carefully about whether the people currently running the web value the web's long tail the way we would like.)
Nothing lasts forever, and eventually you have to port, emulate, archive or otherwise deal with very old applications / media. You see this all over the place: physical media, file formats, protocols, retro gaming, etc.
There's a sweet spot between giving people enough time and tools to make a transition while also avoiding having your platform implode into a black hole of accumulated complexity. Neither end of the spectrum is healthy.
I can still run windows applications that are decades old. If you don't want to support legacy stuff, don't insinuate yourself into global standards.
If this was just Android that would be an issue between Google and their developers/users, but this is everybody.
WHATWG broke this quasi-officially when they declared HTML a "Living Standard". The HTML spec is not a standard to be implemented anymore, it's just a method of coordinating/announcing what the browser vendors are currently working on.
(For the same reason, they dropped the name HTML5 and are only talking about "HTML". Who needs version numbers if there is no future and no past anyway?)
https://whatwg.org/faq#living-standard https://github.com/whatwg/html/blob/main/FAQ.md#html-standar...
Your FAQ stresses the importance of backwards compatibility multiple times.
Seems hard to square removing XSLT with that.
Yeah, that was my impression about their process. They replaced formalized compatibility guarantees through versions with "trust me, bro".
To be completely fair, looking over the lines removed by the PR, there don't appear to be any normative statements requiring XSLT handling in HTML, unless I missed one.
I get that people are more reacting to the prospect of browsers removing existing support, but I was pretty surprised by how short the PR was. I assumed it was more intertwined.
Their explicit intent is to generally remove XSLT from browsers.
If this was just about, e.g., organizing web standards docs for better separation of concerns, I think a lot of people would be reacting to it quite differently.
> They are removing XSLT just for being a long-tail technology. The same argument would apply to other long-tail web technologies.
That's a concise way to put it. IMHO this is also the main problem of the standard.
However I think XSLT isn't only long tail but also a curiosity with just academic value. I've been doing some experimentation and prototyping with XSLT while it was still considered alive. So even if you see some value in it, the problems are endless:
* XSLT is cumbersome to write and read
* XML is clunky, XSLT even more so
* yes, there's SLAX, which is okay-ish, but it becomes clear very fast that it's indeed just syntax sugar
* there's XSLT 2.0 but there's no software support
* nobody uses it, there's no network effect in usage
I think a few years ago I stumbled upon a CMS that uses it, and once I accidentally stumbled upon a website that uses XSLT transformation for styling. That's all the XSLT I ever saw being actually used in the wild.
All in all, XSLT is a useless part of the way too large long tail preventing virtually everyone from writing spec-compliant web browser engines.
> The promise is, "This is HTML. Count on it."
I think after HTML4 and XHTML people saw that a fully rigid standard isn't viable, so they made HTML5 a living standard with a plethora of working groups. So the times when this was ever supposed to be true are long over anyway.
The correct way forward would indeed be to remove more parts of a long tail that's hardly in use and that holds back innovation, and instead maybe keep a short list of features that allow writing modern websites.
(Also, nobody is stopping anyone from using XSLT as a primary language that compiles to HTML5/ES5/CSS.)
There's a perverse irony that Google is as responsible as anybody for cramming a crazy amount of new stuff into the HTML/CSS/browser spec that everybody else has to support forever.
If they were one of the voices for "the browser should be lightweight and let JS libs handle the weird stuff" I would respect this action, but Google is very very not that.
Previously:
Should we remove XSLT from the web platform? – 4 days ago (89 comments):
https://news.ycombinator.com/item?id=44909599
XSLT – Native, zero-config build system for the Web – 27th June 2025 (328 comments):
https://news.ycombinator.com/item?id=44393817
Also related, now flagged:
https://news.ycombinator.com/item?id=44949857
Google is killing the open web, today, 127 comments
Why flagged? The post was reasonable.
Probably labeling a removal of a format (which is somewhat niche anyway) as "killing the open web" was a bit hyperbolic and not entirely warranted in this case.
Imagine that tomorrow, Google announces plans to stop supporting HTML and move everyone to its own version of "CompuServe", delivered only via Google Fiber and accessible only with Google Chrome. What headline would you suggest for that occasion? "Google is killing the open web" has already been used today on an article about the upcoming deprecation of the XSLT format.
No need. With the exception of Safari, the web is ChromeOS already.
All the other alternatives are meaningless, including Firefox.
I am one of the few folks on my team that still uses Firefox; all our projects dropped support for it like 5 years ago.
Wow, you’re really pushing this Web=ChromeOS nonsense. Want to support that with something more than your own isolated anecdote?
Hard to find these days, but it reminds me of this [0]:
> "- Google had a plan called "Project NERA" to turn the web into a walled garden they called "Not Owned But Operated". A core component of this was the forced logins to the chrome browser you've probably experienced (surprise!)"
To "not own but operate" seems to go into the direction of the parent comment.
Also this: https://news.ycombinator.com/item?id=28976574
[0]: https://web.archive.org/web/20211024063021/https://twitter.c...
Chrome and Electron market share, easy to find out.
E.g. Google pushing out dozens of Chrome-only APIs with hardly a spec, and then expecting everyone to support the "standards".
Every discussion about "Safari holding back the web" on HN is about 99% about Google-only non-standards that both Safari and Firefox oppose.
There are multiple "works only in Chrome" websites, many of them regularly published on HN.
I agree. I think the article isn't really about the what but about the how. Which does appear to be rather questionable.
Right. If anything, it's the opposite: removing XSLT reduces the complexity of existing browsers, allowing new ones to catch up faster.
To me it seems that some people just really like using XSLT, and don't want it gone. Which is fair, but it also has nothing to do with the web's openness - yes, Google has far too much power, but XSLT isn't helping.
I don't think that would be the reason, as mods regularly change headlines of otherwise fine discussion threads instead of killing them.
> Probably labeling a removal of a format (which is somewhat niche anyway) as "killing the open web" was a bit hyperbolic and not entirely warranted in this case.
Incorrect on three counts. First, that article lists a bunch of useful technologies that were rejected at WHATWG with unconvincing reasons against massive public protests - it wasn't just about the removal of one format, so that's a misrepresentation. Second, your characterization of XSLT as niche. The article makes a case for why it is like that and why it shouldn't be so: it's niche because it has been neglected by the browser devs themselves. It hasn't been updated to the latest standard in a long time, and it isn't maintained well enough to avoid serious bugs. And finally, third - "killing the open web" being hyperbole. I don't even know where to start. There was a joke that web standards are proposed by someone from Google, reviewed and cleared by someone else from Google, and finally approved by yet someone else from Google. We saw this in action with WEI (the only reason for its partial rollback being the unusual attention and massive backlash from the wider tech community and mainstream media - ours included). At this point the public discussion there is just a farce. I don't know how many times this keeps repeating; that article shows many examples. Let me add my own recollections of the mockery to the mix - the inclusion of EME and the rejection of JPEG-XL (technically not a part of the standard, but it is in a manner of speaking). It doesn't even resemble anything open.
I will be surprised if this comment doesn't receive a ton of negative votes. But there is no point in being a professional and in being here if I'm unwilling to oppose this in the public interest. The general conduct of WHATWG is antithetical to the public interest and meant to escape the attention of the non-tech public. Even the voice of the savvy public is ignored repeatedly and contemptuously. It's not difficult to identify the corruptive influence of private commercial interests on these standards - EME and WEI being the tip of the iceberg. And let's not ignore the elephant in the room: it's getting harder by the day to use a browser (web engine, to be more precise) of your choice. In this context, the removal of XSLT isn't just a unilateral decision (please don't cite Firefox, Safari or Edge - their interdependence is nothing short of a cabal at this point); its justification is based on problems that they themselves created.
Again expecting to be downvoted, but it's hard to miss the patterns - arguments against XSLT that ignore the neglect that led to it, and the dismissal of public comments (then why discuss it where anyone can read and post? why bill it as open?). The same happened with SMIL, JPEG XL,... It's touchy to suggest attempts to drown out the opposition (I know it has a name, but that's enough to trigger some), even if there are sufficient reasons to suspect it. But the flagging of that other article is a blatant indicator of it. Nothing in that article is factually false or remotely hyperbolic. Many of us are first-hand witnesses to the damage and concerns it raises. The article is a good-quality aggregation of the relevant history. Who is so inconvenienced by that? The only reason I can think of is the zeal to censor public-interest opinions. Is the hubris in the group's issue tracker spreading to public tech fora now? Conduct like this makes me lose hope that the web platform will ever be the harbinger of humanity's progress that it once promised to be. Instead it's turning out to be another slow-motion casualty of unbridled greed.
PS: The flag has since been cleared by the admins, but the (non-admin) flaggers' intent is unmistakable.
Users flagged it. We can only guess why users flag things. Perhaps it was the baity title.
I've taken the flags off that post now.
Respectfully, there is nothing baity about that title. The body of that article justifies it. XSLT is only about the last third of it.
These things land differently with different readers, of course, but "Google is killing the open web" does seem pretty baity to me. The combo of grand-claim and something-to-get-mad-about usually is. It doesn't take too large a set of provoked readers to get a large enough set of provoked commenters to bump a thread into flamewar mode.
Please take this as a point of discussion rather than as an argument. That title is something that I and many others would have come up with on our own without needing any provocation. In fact, the exact same thing has been said numerous times independently all over the net. There are so many instances that justify the assertion that you could make a very long list with the relevant HN stories alone. But that isn't the point of this reply.
The way I see it, any general or sweeping accusation against an entity may be construed as clickbait or as too provocative for HN, even if the content backs it up sufficiently. But at what point do you draw the line and consider the accusations credible enough to warrant such scathing criticism? It's not as if these entities are renowned for their ethical conduct or even basic decency regarding the commons. Heated public backlash is often the only avenue they leave us. Case in point: I hope you remember the stand that the HN crowd took against WEI. Make no mistake, such discussions here don't go unnoticed. The talking points here often influence the public discourse, including in the mass media. That's why there is such a fierce fight to control the narrative here.
I respect your right to your opinion. But this is essentially a political subject, and there is no getting around the fact that you cannot divorce politics from technology, or from any relevant subject for that matter. If that's considered flamewar, then I guess flamewars are an unavoidable and normal part of technical discourse. It isn't personal (and no personal attacks should be involved), but the stakes are high enough for the contestants (often monetary in nature). Attempts to curb such heated discourse will have two serious consequences. First, you will give one side, or often (ironically) both, the impression that HN is a place to amplify certain narratives without a balanced take. Second, you'll unintentionally and indirectly influence the outcome outside of HN. From my perspective, that leaves you in the unenviable predicament of making such serious decisions.
So I implore you to consider these matters as well when making such decisions - especially to ensure that your personal biases don't influence what you consider clickbait and flamebait. From my personal experience, I know that you put the utmost care, diligence and sincerity into those matters. But it's possible that the pressure to avoid controversies, fights and bad blood has shifted your Overton window too far into cautious territory over time. Probably a good yardstick is whether the flamewar is important enough and whether it avoids personal harm (physical and emotional). I hope you'll consider this opinion when you make similar determinations in the future. Regards!
When I go through these points I don't think we're disagreeing much! It seems more a difference in style. For example:
HN doesn't lack for criticism of the tech BigCos. If it's true that HN influences the public discourse (which I doubt, but let's assume it does), all that influence was gained by being the same HN, with the same bookish* titles and the same preference for avoiding flamewars that we're talking about here.
I agree, politics can't be divorced from the topics discussed on HN, and it isn't (https://hn.algolia.com/?dateRange=all&page=0&prefix=false&so...). That's not necessarily flamewar, though such topics are more likely to turn flameward.
Yes, many people have the impression that HN is biased, pushing one point of view over another, etc. But people will have that impression regardless. It's in the eye of the beholder, and there are many angry beholders, so we get accused of every bias you can think of. This is baked into the fundamentals of the site.
I don't think moderators' personal tastes are all that intertwined with issues like baity titles. For example, I like Lisp but if someone posted "Lisp crushes its enemies into execrable dust", I'd still edit that title to "Lisp macros provide a high degree of expressiveness" or some representative sentence from the article.
* pg's word about how he wanted the HN frontpage to be
[flagged]
Honestly, the guidelines must also include a clause prohibiting those activities. Sometimes the pattern is overwhelming. But it's prohibited to complain about it. Not an ideal situation. Hope you'll give it a serious thought.
Those activities are certainly prohibited. I don't think we need a guideline to say that, though.
The HN guidelines don't list everything that's prohibited. To publish such a list would be to imply that everything not on the list is ok. That would be a big mistake! It would be carte blanche to the entire internet to find loopholes and wreak havoc with them.
> Sometimes the pattern is overwhelming.
The trouble is that in many cases it feels like such a pattern—and the feeling can be super convincing—yet there turns out to be no evidence for it. Perceptions are awfully unreliable about this.
We ask people not to post about these things in the threads, not to imply that actual astroturfing etc. is at all ok, but because unfounded comments about it vastly outnumber well-founded comments. Worse, they have a way of proliferating and taking over the threads.
Keep in mind that that guideline doesn't say "please don't post and then do nothing". It says "please don't post, but do email us so we can look into it". We do look into it, and on occasions when we find evidence, we act on it. There just needs to be something objective to go on, and in most cases there isn't.
The phenomenon of internet users being far too quick to jump to conclusions about astroturfing, bots, etc., is extremely well established. If there's one phenomenon we've learned about decisively over the years, that's the one. (Well, one of two.)
Btw, I've written about this a ton over the years (https://hn.algolia.com/?sort=byDate&dateRange=all&type=comme...). I particularly remember writing these two:
https://news.ycombinator.com/item?id=35932851 (May 2023)
https://news.ycombinator.com/item?id=27398725 (June 2021)
They're long, but they still hold up as descriptions of these phenomena and the moderation approach we take to them.
[flagged]
Companies use the same tactics as some states, bot campaigns, etc. The aim is to suppress, or at least diminish, the voices of opposition.
The flagged post is a perfect example. It contains just a fraction of factual information, but it was enough for bot farms to engage. Manipulators get mad at truth.
[flagged]
This is actually not a bad idea. Why should the browser contain a specific template engine, like XSLT, and not Jinja for example? Also it can be reimplemented using JS or WASM.
The browsers today are too bloated and it is difficult to create a new browser engine. I wish there were simpler standards for "minimal browser", for example, supporting only basic HTML tags, basic layout rules, WASM and Java bytecode.
Many things, like WebAudio or Canvas, could be implemented using WASM modules, which, as a side effect, would prevent their use for fingerprinting.
> This is actually not a bad idea. Why should the browser contain a specific template engine, like XSLT
XSLT is a specification for a "template engine" and not a specific engine. There are dozens of XSLT implementations.
Mozilla notably doesn't use libxslt but transformiix: https://web.mit.edu/ghudson/dev/nokrb/third/firefox/extensio...
> and not Jinja for example?
Jinja operates on text, so it's basically document.write(). XSLT works on the nodes themselves. That's better.
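To make the distinction concrete, a rough sketch in browser JS (markup inlined for brevity; this uses the standard DOMParser and XSLTProcessor APIs):

  // String templating: you produce text and the browser has to re-parse it.
  const user = "world";
  document.body.innerHTML = `<p>Hello ${user}</p>`; // text in, re-parse

  // XSLT: the processor emits DOM nodes directly, no string round-trip.
  const parse = (s) => new DOMParser().parseFromString(s, "application/xml");
  const proc = new XSLTProcessor();
  proc.importStylesheet(parse(`
    <xsl:stylesheet version="1.0"
        xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
      <xsl:template match="/greeting">
        <p>Hello <xsl:value-of select="."/></p>
      </xsl:template>
    </xsl:stylesheet>`));
  document.body.appendChild(
    proc.transformToFragment(parse(`<greeting>world</greeting>`), document));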
> Also it can be reimplemented using JS or WASM.
Sort of. JS is much slower than the native XSLT transform, and the XSLT result is cacheable. That's huge.
I think if you view XSLT as nothing more than ancient technology that nobody uses, then I can see how you could think this is ok, but I've been looking at it as a secret weapon: I've been using it for the last twenty years because it's faster than everything else.
I bet Google will try and solve this problem they're creating by pushing AMP again...
> The browsers today are too bloated
No, Google's browser today is too bloated: that's nobody's fault but Google's.
> and it is difficult to create a new browser engine
I don't recommend confusing difficult to create with difficult to sell unless you're looking for a reason to not do something: There's usually very little overlap between the two in the solution.
I'm asking this genuinely, not as a leading question or a gotcha trap: why use this client side, instead of running it on the server and sending the rendered output?
For one, in many cases the XML + XSLT is more compact than the rendered output, so there are hosting and bandwidth benefits, especially if you're transforming a lot of XML files with the same XSLT.
That's fascinating, because I wouldn't have expected it. What's an example of when the rendered output would be bigger?
Imagine 1000 numbers in XML, and an XSLT with xsl:for-each which renders a div with a label, a textbox with the number, and maybe a button. That's a simple example; the output would be a lot longer than the XML + XSLT.
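Roughly, a sketch of that in browser JS (element names made up; the point is just the payload sizes):

  // ~1000 numbers as compact XML plus one small xsl:for-each template;
  // the client expands them into far larger HTML.
  const xml = `<?xml version="1.0"?><numbers>` +
    Array.from({ length: 1000 }, (_, i) => `<n>${i}</n>`).join("") +
    `</numbers>`;
  const xslt = `<xsl:stylesheet version="1.0"
      xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
    <xsl:template match="/numbers">
      <div>
        <xsl:for-each select="n">
          <div>
            <label>Value</label>
            <input type="text" value="{.}"/>
            <button>Save</button>
          </div>
        </xsl:for-each>
      </div>
    </xsl:template>
  </xsl:stylesheet>`;
  const parse = (s) => new DOMParser().parseFromString(s, "application/xml");
  const proc = new XSLTProcessor();
  proc.importStylesheet(parse(xslt));
  const frag = proc.transformToFragment(parse(xml), document);
  // The expanded markup is several times the size of xml + xslt combined.
  console.log(new XMLSerializer().serializeToString(frag).length);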
Ah, gotcha. Thanks for that. Ok, I could see why that'd be smaller, although I wonder how much compression could equalize it.
I think the obvious answer is that client-side mapping would let the browser give different views of the data to the client. The obvious problem is that downloading all the data and then transforming it is inherently inefficient (and sure, despite this, download-then-process is a common solution used for many problems - but it's problematic to specify the worst solution before you know the problem).
Perhaps there's an alternative universe where JavaScript lost and an elegant, declarative XSLT could present data and incrementally download only what's needed, allowing compact and elegant websites.
But in our universe today, this mapping language wound up a half-thought-out idea that just kicked around in the specs for a long time without ever making sense.
My gut instinct is to agree with every bit of that. I admit that I might be missing something, but I've never wanted to send the data once and then have the client view it in multiple transformed ways (minus simple presentation stuff like sorting a table by column and things like that).
And using it to generate RSS as mentioned elsewhere in the comments? That makes perfect sense to me on the server. I don't know that I've ever even seen client-side generated RSS.
But again, this may all be my own lack of imagination.
> I've been looking at it as a secret weapon: I've been using it for the last twenty years because it's faster than everything else.
Serving a server-generated HTML page could be even faster.
Maybe, but the PR author, who created the issue there as well, gave an example: 'JSON+React'. React is one of the slowest frameworks out there. Performance is rarely a consideration in contemporary front-end work.
> Serving a server-generated HTML page could be even faster.
Except it isn't.
Lots of things could be faster than they are.
Loading one page is probably faster that loading a template and only after that loading the data with the second request, given that the network latency can be pretty high. That's why Google serves (served?) its main page as a single file and not as multiple HTML/CSS/JS files.
> Loading one page is probably faster that loading a template and only after that loading the data with the second request, given that the network latency can be pretty high
XSLT is XML: It can be served with the XML as a single request.
You don't have any idea what you're talking about.
> That's why Google serves (served?) its main page as a single file and not as multiple HTML/CSS/JS files.
Google.com used to be about a kilobyte. Now it's 100kb. I think it's absolutely clear Google either doesn't have the first idea how to make things fast, or doesn't care.
That assumes the server has a lot of additional CPU power to serve the content as HTML (and thus do the templating server side), whereas with XSLT I can serve XML and the XSLT and the client side can render the page according to the XSLT.
The XSLT can also be served once, and then cached for a very long time period, and the XML can be very small.
With server-side rendering you control the amount of compute you are providing, with client-side rendering you cannot control anything and if the app would be dog slow on some devices you can't do anything.
> Sort of. JS is much slower than the native XSLT transform, and the XSLT result is cacheable. That's huge.
Nobody is going to process millions of DOM nodes with XSLT, because the browser won't be able to display them anyway. And one can write a WASM implementation.
I think you're confusing throughput with latency.
You're right nobody processes a million DOM nodes with XSLT in a browser, but you're wrong about everything else: WASM has a huge startup cost.
Consider applying stylesheet properties: XSLT knows exactly how to lay things out so it can put all of the stylesheet properties directly on the element. Pre-rendered HTML would be huge. CSS is slow. XSLT gets you direct-attach, small-payload, and low-latency display.
That's an even rarer case - embedding CSS rules into an XSLT template (if I understood you correctly); I've never heard of it. I know that CSS is sometimes embedded into HTML, though.
> Why should the browser contain a specific template engine, like XSLT,
XSLT is a templating language (like HTML is a content language), not a template engine like Blink or WebKit is a browser engine.
> Also it can be reimplemented using JS or WASM.
Changing the implementation wouldn't involve taking the language out of the web platform. There wouldn't need to be any standardization talk about changing the implementation used in one or more browsers.
The old, bug-ridden native XSLT code could also be shipped as WASM along with the browser rather than being deprecated. The sandbox would nullify the exploits, and avoid breaking old sites.
They actually thought about it, and decided not to do it :-/
> Many things, like WebAudio or Canvas, could be immplemented using WASM modules, which as a side effect, would prevent their use for fingerprinting.
Audio and canvas are fundamental I/O things. You can’t shift them to WASM.
You could theoretically shift a fair bit of Audio into a WASM blob, just expose something more like Mozilla’s original Audio Data API which the Web Audio API defeated for some reason, and implement the rest atop that single primitive.
2D canvas context includes some rendering stuff that needs to match DOM rendering. So you can’t even just expose pixel data and implement the rest of the 2D context in a WASM blob atop that.
And shifting as much of 2D context to WASM as you could would destroy its performance. As for WebGL and WebGPU contexts, their whole thing is GPU integration, you can’t do that via WASM.
So overall, these things you’re saying could be done in WASM are the primitives, so they definitely can’t.
Why should the browser contain a specific scripting language, like JavaScript, and not ActiveScript for example?
I suspect you might know this, but Internet Explorer 3 supported JavaScript (JScript) and VBScript in 1996.
The browser could use a Java or .NET bytecode interpreter - then it doesn't need to have a compiler and you can use any language - but you won't be able to see a script's source code.
You already effectively can't see a script's source code, because we compile, minify, and obfuscate JS these days - because the performance characteristics are so poor.
Actually, most of the time C# decompiles nicer from CLR bytecode than esoterically built JS.
It's a consequence of javascript being "good enough." Originally, the goal was for the web to support multiple languages (I think one prototype of the <script> tag had a "type=text/tcl") and IE supported VBScript for a while.
But at the end of the day, you only really need one, and the type attribute was phased out of the script tag entirely, and Javascript won.
https://developer.mozilla.org/en-US/docs/Web/HTML/Reference/...
It is actively used today.
Fair enough. Its use to denote other scripting languages was phased out.
You can still use it that way; you'd just have either a browser extension or a JavaScript file read the contents and use them. Here is a 2017 Stack Overflow thread, for example: https://stackoverflow.com/questions/14015899/embed-typescrip...
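A minimal sketch of that pattern (the type and id are made up; any unrecognized type works):

  // Given markup like: <script type="text/x-typescript" id="src">…</script>
  // (unknown type, so the browser leaves it alone), a loader can read it:
  const source = document.getElementById("src").textContent;
  // ...then hand `source` to an in-page compiler or interpreter.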
BTW, over a third of court case management software in the US is run on VBScript hosted in IE7 compatibility mode.
> Why should the browser contain a specific template engine, like XSLT, and not Jinja for example?
Historic reasons, and it sounds like they want it to contain zero template engines. You could transpile a subset of Jinja or Mustache to XSLT, but no one seems to do it or care.
> and it sounds like they want it to contain zero template engines.
The funny thing? No, they want to create a new one: https://github.com/WICG/webcomponents/issues/1069
Adding XSLT support is as absurd as adding React into a browser (especially given that its change detection is inefficient and requires a lot of computation). Instead, browsers should provide better change-tracking methods for JS objects.
Knockout.js may be off the radar these days, but has robust handling for this.
Still the best framework I've ever worked with.
The downside of Knockout was that it used proxies for change tracking, and you had to create those proxies manually: you couldn't have an object with a plain Number property; you had to have an object with a proxy function as a property.
So instead of a complete browser engine we get a basic engine, and we need to write the complete one on top of it?
Sounds like Wayland
>Why should the browser contain a specific template engine, like XSLT
Because XSLT is part of the web standards.
I kind of agree that little-used,[0] non-web-like features are fair to consider for removal. However, I wish they didn't hide behind security vulnerabilities as the reason, as that clearly wasn't it. The author didn't even bother to look into whether a memory-safe package existed. "We're removing this for your own good" is the worst way to go about it, but he still doubles down on this idea later in the thread.
[0] ~0.001% usage according to one post there
> [0] ~0.001% usage according to one post there
This is still a massive number of people who are going to be affected by this.
https://news.ycombinator.com/item?id=44938747
I get what you're saying, but following this line of reasoning would mean that successful, wide-spread specifications, standards, and technologies must never drop any features. They would only ever accumulate new features, bloating to the point of uselessness, and die under the weight of their own success.
Nonsense. The line of reasoning is that putting percentages on billions of users is intellectually dishonest: you don't have to go any further than that. It is perhaps out of ignorance (now you know), but if you try to make it about anything else, that's just arguing in bad faith.
Of course you can drop features, but if you work at Google I think you can pick something else, and you'll have a hard time convincing anyone that XSLT, which was in Chrome back when Chrome was fast, is why Chrome isn't fast anymore. And if you don't work at Google, why do you care? You've learned something new today. Enjoy.
It's not dishonest. Software needs to be maintained. And Google's isn't the only web browser, nor should it be. It makes sense to re-evaluate which features make sense for the web. Flash and Java applets were both removed from web browsers and broke sites for millions of users, probably far more than XSLT would. But it was still the right call. This case is more nuanced than those, but I still think it's at least fair to discuss removing it.
> You've learned something new today. Enjoy.
Indeed: I learned that you're a condescending ass who doesn't engage with the actual argument I brought up.
> must never drop any features
On the web? That's about right. See Google's own document on this: https://docs.google.com/document/d/1RC-pBBvsazYfCNNUSkPqAVpS...
It’s classic Google behaviour: “oh not used by a billion people? Didn’t get popular enough, axe it”.
They arguably became a victim of their own scale.
Compare WebKit to the UDK (the Unreal Development Kit for game dev) to see why there is so much bloat in the browser. People have wanted to render more and more advanced things, and the WebKit engine should cater to all of them as best it can.
For better or worse, HTTP is no longer just for serving textual documents.
Maya is the go to example of bloat for me for many of the same reasons.
While this sounds crazy at first, I could warm to several incremental layers of features, where browsers could choose to implement support for only a subset of layers. The lowest layer would be something like HTTP with plain text, the next one HTML, then CSS with basic selectors, then CSS with the full selector set, then ECMAScript and WASM, then device APIs, and so forth.
Would make it possible to create spec-compliant browsers with a subset of the web platform, fulfilling different use cases without ripping out essentials or hacking them in.
There is no point in several layers because to maximize compatibility developers would need to target the simplest layer. And if they don't, simple browsers won't be able to compete with full-fledged ones.
You can set the doctype in the document to the spec you want to use, which is basically what you're asking for. Try setting <!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
> Why should the browser contain a specific template engine, like XSLT, and not Jinja for example? Also it can be reimplemented using JS or WASM.
I think a dedicated unsupported media type -> supported media type WASM transformation interface would be good. You could use it for new image formats and the like as well. There are things like JXL.js that do this:
https://github.com/niutech/jxl.js
I get the point of a minimal browser and WASM, but Java bytecode?! Why not Python bytecode? Adding support for any specific bytecode seems unreasonable to me. By layout rules, do you mean getting rid of CSS? That also sounds unreasonable, IMHO.
And no, WebAudio and Canvas couldn't be implemented in client WASM without big security implications. If by module you mean inside the browser, then what is the point of WASM here?
What WebAudio needs to provide is only a means to get or push buffers from/to audio devices and to run code in a high-priority thread. There is no need for the browser to provide implementations of low-pass filters, audio processing graphs and similar primitives.
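For what it's worth, that buffer-level primitive more or less already exists as AudioWorklet; a minimal pass-through sketch (file and processor names made up):

  // processor.js - runs on the audio rendering thread
  class Passthrough extends AudioWorkletProcessor {
    process(inputs, outputs) {
      const input = inputs[0] ?? [];
      const output = outputs[0];
      for (let ch = 0; ch < output.length; ch++) {
        // copy the input buffers straight to the output buffers
        output[ch].set(input[ch] ?? new Float32Array(output[ch].length));
      }
      return true; // keep the processor alive
    }
  }
  registerProcessor("passthrough", Passthrough);

  // main.js (a module, so top-level await is fine)
  const ctx = new AudioContext();
  await ctx.audioWorklet.addModule("processor.js");
  new AudioWorkletNode(ctx, "passthrough").connect(ctx.destination);

Filters, graphs and the rest could in principle be user-space code on top of that one primitive.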
Honestly, even WASM makes it not very minimal in my book. A minimal browser should be HTML and perhaps a subset of CSS, that's it.
Wasm is ANYTHING but basic.
Fuck javascript, fuck wasm, fuck html, fuck css.
Rebase it all on XML/XPath/XQuery; that way you only need ONE parser, one simple engine.
This whole kitchen sink/full blown OS nonsense needs to end.
Edit: You’re clearly a wasm shill, wasm is an abomination that needs to die.
Oh hey, that thing happened that one could easily see was going to happen [0]. The writing was on the wall for XSL as soon as the browsers tore out FTP support: their desire to minimize attack surface trumps any tendency to leave well enough alone.
I wonder what the next step of removing less-popular features will be. Probably the SMIL attributes in favor of CSS for SVG animations, they've been grumbling about those for a while. Or maybe they'll ultimately decide that they don't like native MathML support after all. Really, any functionality that doesn't fit in the mold of "a CSS attribute" or "a JS method" is at risk, including most things XML-related.
[0] https://news.ycombinator.com/item?id=43880391
CSS animations still lack a semantic way to sequence animations based on the beginning/end of some other animation, which SMIL offers. With SMIL you can say 'when this animation ID begins/ends only then trigger this other animation', including time offsets from that point.
Which is miles better than having to use calc()s for CSS animation timing, which requires a kludge of CSS variables etc. to keep track of when something begins/ends time-wise, if you want to avoid requiring JavaScript. And some years ago Firefox IIRC didn't even support time-based calcs.
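For the unfamiliar, the SMIL version of that chaining is just this (ids and timings illustrative; injected from JS only to keep the snippet self-contained):

  document.body.insertAdjacentHTML("beforeend", `
    <svg width="200" height="80">
      <rect id="box" width="20" height="20" fill="teal">
        <animate id="slide" attributeName="x" from="0" to="100" dur="1s"/>
        <!-- begins exactly when #slide ends; no clock bookkeeping -->
        <animate attributeName="y" begin="slide.end" from="0" to="40" dur="1s"/>
      </rect>
    </svg>`);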
When Chromium announced the intent to deprecate SMIL a decade back (before relenting) it was far too early to consider that given CSS at that time lacked much of what SMIL allowed for (including motion along a path and SVG attribute value animations, which saw CSS support later). It also set off a chain of articles and never-again updated notes warning about SMIL, which just added to confusion. I remember even an LLM mistakenly believing SMIL was still deprecated in Chromium.
> if wanting to avoid requiring Javascript.
And there's one of the issues: browser devs are perfectly happy if user JS can be used to replicate some piece of functionality, since then it's not their problem.
> their desire to minimize attack surface trumps any tendency to leave well enough alone.
Is that a good thing or a bad thing?
Technical people like us have our desires. But the billions of people doing banking on their browsers probably have different priorities.
There are ways to reduce attack surface short of tearing out support - such as taking one of those alleged JS polyfills and plugging it into the browser in place of all the C++. But if attack surface is your sole concern, then one of those options sounds much easier than the other, and also ever-so-slightly superior.
In any case, there's no limit on how far one can disregard compatibility in the name of security. Just look at the situation on Apple OSes, where developers are kept on a constant treadmill to update their programs to the latest APIs. I'd rather not have everything trend in that direction, even if it means keeping shims and polyfills that aren't totally necessary for modern users.
It is a balance (compatibility vs attack surface). The issue with XSLT (which I am still a strong advocate for) is that nobody is maintaining that code, so vulnerabilities sit there undetected - like the relatively recent discovery of the xsl:document vulnerability.
> It is a balance (compatibility vs attack surface).
What I'm trying to say is that it's a false dichotomy in most cases: implementations could almost eliminate the attack surface while maintaining the same functionality, and without devoting any more ongoing effort. Such as, for instance, JS polyfills, or WASM blobs, which could be subjected to the usual security boundaries no matter how bug-ridden and ill-maintained they are internally.
But removing the functionality is often seen as the more expedient option, and so that's what gets picked.
Sure, but this requires someone sitting down and writing the JS polyfill, and then maintaining it indefinitely. And for something as complicated as XSLT, that will surely be indefinite maintenance, because complicated specs beget complicated implementations.
In the absence of anyone raring to do that, removal seems the more sensible option.
The vendor discussion on removing XSLT is predicated on someone creating a polyfill for users to move to. It is not an unreasonable assumption because a polyfill can be created fairly trivially by compiling the existing XSLT processor to WASM.
And it is also fairly trivial to put that polyfill into the browsers.
The Chrome team has been moaning about XSLT for a decade. If security were really their concern, they could have replaced the implementation with asm.js a decade ago, just as they did for PDFs.
> Sure, but this requires [...] maintaining it indefinitely.
Does it, though? Browsers already have existing XSLT stacks, which have somehow gotten by practically unmodified for the last 20 years. The basic XSLT 1.0 functionality never changes, and the links between the XSLT code and the rest of the codebase rarely change, so I find it hard to believe that slapping it into a sandbox would suddenly turn it into a persistent time sink.
Wasn't this whole discussion sparked by a fairly significant bug in the libxslt implementation? There's also a comment from a Chrome developer somewhere in this thread talking about regularly trying to fix things in libxslt, and how difficult that was because of how the library is structured.
So it is currently a persistent time sink, and rewriting it so that it can sit inside the browser sandbox will probably add a significant amount of work in its own right. If that's work that nobody wants to do, then it's difficult to see what your solution actually is.
The current problem is that bugs in libxslt can have big security implications, so putting it or an equivalent XSLT 1.0 processor in a safe sandbox would make active maintenance far less urgent, since the worst-case scenario would just be presentation issues.
As for immediate work, some in this thread have proposed compiling libxslt to WASM and using that, which sounds perfectly viable to me, if inefficient. WASM toolchains have progressed far enough that very few changes are needed to a C/C++ codebase to get it to compile and run properly, so all that's left is to set up the entry points.
(And if there really were no one-for-one replacement short of a massive labor effort, then current XSLT users would be left with no simple alternative at all, which would make this decision all the worse.)
> The writing was on the wall for XSL as soon as the browsers tore out FTP support
When did they do that? Can I not still ftp://example.com in the url bar?
FTP support was completely removed from Chrome with the release of Chrome 88, which was released in January 2021
> their desire to minimize attack surface trumps any tendency to leave well enough alone.
Is that why Chrome unilaterally releases 1000+ web APIs a year, many of them quite complex and spanning a huge range of things that can go wrong (including access to USB, serial devices, etc.)? To reduce the attack surface?
Well, their desire to stay trendy trumps their desire to minimize attack surface, I'd have to imagine. Alas, XML is roughly the polar opposite of trendy, mostly seen as belonging in the trash heap of the 90s alongside SOAP, CORBA, DCOM, Java applets, etc.
How do we feel about this concern in general? Not just specific to XSLTs
> my main concern is for the “long tail” of the web—there's lots of vital information only available on random university/personal websites last updated before 2005
It's a strong argument for me because I run a lot of old webpages that continue to 'just work', as well as regularly getting value out of other people's old pages. HTML and JS have always been backwards compatible so far, or at least close enough that you get away with slapping a TLS certificate onto the webserver
But I also see that we can't keep support for every old thing indefinitely. See Flash. People make emulators like Ruffle that work impressively well to play a nostalgic game or use a website on the Internet Archive whose main menu (guilty as charged) was a Flash widget. Is that the way we should go with this, emulators? Or a dedicated browser that still gets security updates, but is intended to only view old documents, the way that we see slide film material today? Or some other way?
It seems like they've already created a browser extension that'll act as a polyfill [0]. Chrome just doesn't want to ship and maintain it. Which is very similar to Ruffle.
[0]: https://chromewebstore.google.com/detail/xslt-polyfill/hlahh...
This would be sad, but I think it's sadder that we didn't spend more effort integrating more modern XSLT. It was painful to use _but_ if it had a few revisions in the browser I think it would have been a massive contender to things like React.
XML was unfairly demonized for the baggage that IBM and other enterprise orgs tied to it, but the standard itself was frigging amazing and powerful.
I have to agree. I liked XSLT and would have done much more with just a few additions to it.
Converting a simple manually edited XML database of things to HTML was awesome. What I mostly wanted was the ability to pass in a selected item to display differently. That would allow all sorts of interactivity with static documents.
> @whatwg whatwg locked as too heated and limited conversation to collaborators
Too heated? Looked pretty civil and reasonable to me. Would it be ridiculous to suggest that the tolerance for heat might depend on how commenters are aligned with respect to a particular vendor?
"too heated" is a codeword for "we don't want to deal with dissenting opinions". Same on other forums, e.g. Reddit.
It's a little jarring that the 1 comment visible underneath that is a "Nice, thanks for working on this!", and if you click on the user that wrote it, it's someone working for Google on Chrome... sheesh, kiss-ass much?
FYI, I heard that it was Apple employees who administer that repo that marked those comments as off topic and locked the thread, but people are attributing that to the Google employee that opened the issue.
There was a discussion they opened to "gather community feedback" just three weeks ago. That one did get heated: https://github.com/whatwg/html/issues/11523
Google ignored everything, pushed on with the removal, and now pre-emptively closed this discussion, too
> Google ignored everything, pushed on with the removal, and now pre-emptively closed this discussion, too
To be fair to Google, they've consistently steam-rolled the standards processes like that for as long as I can remember, so it really isn't new.
I don't understand how this is any more fair to Google than the quoted statement.
I disagree - I saw a number of comments I would consider rude and unprofessional and once a PR gets posted on HN, frankly it typically gets much worse.
I find people on HN are often very motivated reasoners when it comes to judging civility, but there’s basically no excuse for calling people “fuckers” or whatever.
> Why do people create such joke PRs?
> We didn't forgot your decade of fuckeries, Google.
> You wanted some heated comment? You are served.
> the JavaScript brainworm that has destroyed the minds of the new generation
> the covert war being waged by the WHATWG
> This is nothing short of technical sabotage, and it’s a disgrace.
> breaking yet another piece of the open web you don't find convenient for serving people ads and LLM slop.
> Are Google, Apple, Mozilla going to pay for the additional hosting costs incurred by those affected by the removal of client-side XSLT support?
> Hint: if you don't want to be called out on your lies, don't lie.
> Evil big data companies who built their business around obsoleting privacy. Companies who have built their business around destroying freedom and democracy.
> Will you side with privacy and freedom or will you side with dictatorship?
Bullshit like this has no place in an issue tracker. If people didn’t act like such children in a place designed for productive conversation, then maybe the repo owners wouldn’t be so trigger happy.
[flagged]
I love XSLT. I released a client-side XSLT-based PWA last year (https://github.com/ssg/eksi-yedek - in Turkish). The reason I had picked XSLT was that the input was in XML, and browser-based XSLT was the most suitable candidate for a PWA.
Two years ago, I created a book in memory of a late friend to create a compilation of her posts on social media. Again, thanks to XSLT, it was a breeze.
XSLT has been orphaned on the browser-side for the last quarter century, but the story on the server-side isn't better either. I think that the only modern and comprehensive implementation comes with Saxon-JS which is bloated and has an unwieldy API for JavaScript.
Were XSLT dropped next year, what would be the course of action for us who rely on browser-based XSLT APIs?
XSLT, especially 3.0, is immensely powerful, and not having good solutions in the JS ecosystem would make the aftermath of this decision look bleaker.
I'd just use the browser's XML parser and JavaScript for the transformation. Which is what I assume a putative XSLT JavaScript library would do.
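Something like this, as a rough sketch (feed URL and element names made up; run inside an async context):

  const res = await fetch("/posts.xml");
  const doc = new DOMParser().parseFromString(
    await res.text(), "application/xml");
  const list = document.createElement("ul");
  for (const item of doc.querySelectorAll("item")) {
    const li = document.createElement("li");
    li.textContent = item.querySelector("title")?.textContent ?? "";
    list.appendChild(li);
  }
  document.body.appendChild(list);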
And if you’re leaning towards a declarative framework, use React.
There are many declarative frameworks. React is one of the slowest.
Fwiw the XSLT implementation in Blink and WebKit is extremely inefficient. For example, it converts the entire document into a string, parses that into a format compatible with libxslt, then produces a string and parses it back into a node structure again. I suspect a user-space library could be similarly effective.
Ex. https://source.chromium.org/chromium/chromium/src/+/main:thi...
https://source.chromium.org/chromium/chromium/src/+/main:thi...
https://github.com/WebKit/WebKit/blob/65b2fb1c3c4d0e85ca3902...
Mozilla has an in-house implementation at least:
https://github.com/mozilla-firefox/firefox/tree/5f99d536df02...
It seems like the answer to the compat issue might be the MathML approach. An outside vendor would need to contribute an implementation to every browser. Possibly taking the very inefficient route since that's easy to port.
I have no opinion on this, just sharing my one-and-only XSLT story.
My first job in software was as a software test development intern at a ~500-employee non-profit, around 2008, when I was 19 or 20 years old. Writing software to test software. One of my tasks during the 2 years I worked there was to write documentation for their XML test data format. The test data was written in XML documents, then run through a test runner for validation. I somehow found out about XSLT and it seemed like the perfect solution. So I wrote up XML schemas for the XML test data, in XSD of course. The documentation lived in the schema, alongside the type definitions. Then I wrote an XSLT document, to take in those XML schemas and output HTML pages, which is also basically XML.
So in effect what I wrote was an XML program, which took XML as input, and outputted XML, all entirely in the browser at document-view time.
And it actually worked and I felt super proud of it. I definitely remember it worked in our official browser (Internet Explorer 7, natch). I recall testing it in my preferred browser, Firefox (version 3, check out that new AwesomeBar, baby), and I think I got it working there, too, with some effort.
I always wonder what happened with that XML nightmare I created. I wonder if anyone ever actually used it or maybe even maintained it for some time. I guess it most likely just got thrown away wholesale during an inevitable rewrite. But I still think fondly back on that XSLT "program" even today.
My XSLT story:
I wrote my personal website in XML with XSLT transforming into something viewable in the browser circa 2008. I was definitely inspired by CSS Zen Garden where the same HTML gave drastically different presentation with different CSS, but I thought that was too restrictive with too much overly tricky CSS. I thought the code would be more maintainable by writing XSLT transforms for different themes of my personal website. That personal webpage was my version of the static site generator craze: I spent 80% of the time on the XSLT and 20% on the content of the website. Fond memories, even though I found XSLT to be incredibly difficult to write.
Ha! Shout out to CSS Zen Garden. I didn't go as far down the rabbit hole as you did (noped out before XSLT made its way into my mix), but around that time I made sure all of my html was valid XML (er, XHTML), complete with the little validation badge at the bottom of the page. 80:20 form to content ratio sounds about right.
Another fellow soul!
My first rewrite of my site, as I moved it away from Yahoo onto my own domain, was also in XSLT/XML.
Eventually I got tired of keeping it that way and rewrote the parsing and HTML generation in PHP, but kept the site content in XML, to this day.
Every now and then I think about rewriting it, but I'd rather do native development outside work, and I don't suffer from either PHP or XML allergies.
Doing declarative programming in XSLT was cool though.
Almost the same: I wrote an XML CMS and then the XSLT into HTML... then realized I would have to continue writing XML, said hell no, and rewrote the whole thing with PHP and a MySQL DB.
I implemented the full XPath and XSLT language, with debugging capabilities, for a company I worked for some 25-ish years ago. It was fun (until XPath and XSLT 2 - well, that was fun too, but because of a nice work colleague, not the language), but I always did wonder how this took off and Lisp didn't.
Blame the Java people; they always over-engineered, and 25 years ago they still had a voice.
After the XML madness, whenever I see some tech being hyped and used all over the place, I remember the days of XML and ignore it.
I was quite fond of DokuWiki's XML-RPC. Probably long replaced now, but it was a godsend to have a simple RPC to the server from within JavaScript. (2007)
I once attempted to use XSLT to transform SOAP requests generated by our system so the providers' implementations would accept them. This included having to sufficiently grok XSD, WSDL et al. to figure out what part of the chain was broken.
At the end of the (very long) process, I just hard-coded the reference request XML given by the particularly problematic endpoints, put some regex replacements behind it, and called it a day.
“Yo dawg, I heard you liked XML …”
We can laugh at NFTs, but honestly there are a lot of technical solutions that fit the "kinda works/kinda seems like a good idea" mold, but in the end it's a house of cards with a vested interest.
Imagine people put energy into writing that thick of a book about XML. To be filed into the Theology section of a library
Except the only selling point for NFTs was laundering money and scamming people.
It's not like the browsers can just switch to some better maintained XSLT library. There aren't any. There are about 1.5 closed-source XSLT 3 implementations, Altova and Saxonica. I don't want to sound ageist, but the latter is developed by the XSLT spec's main author, who is nearing retirement age. This library is developed behind closed doors, and from time to time zip files with code get uploaded to GitHub. Make of that what you will in terms of the XSLT community. For all of its elegance, XSLT doesn't seem very relevant if nobody is implementing it. I'm all for the open web, but XSLT should just be left in peace to slide into the good night.
Saxonica is an Employee Ownership Trust and the team as a whole is relatively young (far off from retirement).
"Saxonica today counts some of the world's largest companies among its customer base. Several of the world's biggest banks have enterprise licenses; publishers around the world use Saxon as a core part of their XML workflow; and many of the biggest names in the software industry package Saxon-EE as a component of the applications they distribute or the services they deploy on the cloud."
https://www.saxonica.com/about/about.xml
So what do you think about: https://github.com/Paligo/xee ?
Best comment from another related thread (not from me):
So the libxml/libxslt unpaid volunteer maintainer wants to stop doing 'disclosure embargo' of reported security issues: https://gitlab.gnome.org/GNOME/libxml2/-/issues/913 Shortly after that, Google Chrome wants to remove XSLT support.
Coincidence?
Source (yawaramin): https://news.ycombinator.com/item?id=44925104
PS: It seems libxslt, which is used by Blink, has an (unpaid) maintainer, but there's really nothing going on there; it seems pretty unmaintained: https://gitlab.gnome.org/GNOME/libxslt/-/commits/master?ref_...
PS2: Reminds me all of this https://xkcd.com/2347/ A shame that libxml and libxslt could not get more support while used everywhere. Thanks for all the hard work to the unpaid volunteers!
This seems totally fine, though? The XSLT 1.0 maintainer says the support burden is costing him heavily; then Chrome says removing support is fine, which seems to suit both of them.
It'd be much better if Google did support the maintainer, but given the apparent lack of use of XSLT 1.0 and the maintainer already having burned out, dropping XSLT support seems like the best available outcome:
> "I just stepped down as libxslt maintainer and it's unlikely that this project will ever be maintained again"
Mozilla doesn't use libxslt
I used XSLT once to publish recipes on the web. The cookbook software my mom used (maybe MasterCook?) could "export as xml" and I wrote an xslt to transform it into readable html. It was fine. It's, of course, also possible to run the XSLT from the command line to generate static html.
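(With libxslt's command-line tool that's a one-liner; file names illustrative:)

  xsltproc -o recipes.html recipes.xsl recipes.xml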
The suggestion of using a polyfill is a bit nonsensical, as I suspect there is little new web being written in XSLT, so someone would have to go through all the old pages out there and add the polyfill. Does anyone know if XSLT could be accomplished with a Chrome extension? That would make more sense.
It would sure be possible to combine a polyfill with a webextension, not sure if XSLT contains any footguns for this approach that would make it hard to do, but if it's solely a single client-side transformation of the initial XML response, this should work fine.
Cool example with the recipes page :)
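A rough sketch of such a content script (assumes the tab is a raw XML document with an xml-stylesheet processing instruction, and leans on XSLTProcessor while it's still shipped; all names illustrative):

  function findStylesheetHref(doc) {
    for (const node of doc.childNodes) {
      if (node.nodeType === Node.PROCESSING_INSTRUCTION_NODE &&
          node.target === "xml-stylesheet") {
        const m = /href="([^"]+)"/.exec(node.data);
        if (m) return m[1];
      }
    }
    return null;
  }

  async function applyXslt() {
    const href = findStylesheetHref(document);
    if (!href) return;
    const text = await (await fetch(href)).text();
    const xslt = new DOMParser().parseFromString(text, "application/xml");
    const proc = new XSLTProcessor(); // or a JS/WASM engine once this is gone
    proc.importStylesheet(xslt);
    const out = proc.transformToDocument(document);
    document.replaceChild(document.adoptNode(out.documentElement),
                          document.documentElement);
  }

  applyXslt();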
I guess it's time for me to write that webextension; if it gets popular enough I can sell it to someone wearing a black hat for maybe tens of dollars!
haha good point :(
There are also very valid comments in there about why removal would still hurt existing sites and applications, especially for embedded devices.
https://github.com/whatwg/html/pull/11563#issuecomment-31970...
The idea of building something like PDF.js makes a lot of sense. I think the core crux of it, though, is that the polyfill should be in the browser, not something that a site maintainer has to manually implement.
We absolutely shouldn't be just ripping out support.
If there is a polyfill, I'm not sure making it in JavaScript makes sense, but WebAssembly could work.
I love how one company can do whatever they want. This is perfect.
Like how every other browser vendor is supportive and the issue was actually raised by Mozilla?
See the agenda here: https://github.com/whatwg/html/issues/11131#issuecomment-274...
How do you think we got into this mess in the first place? First it was Netscape, then Microsoft, now Google.
Yet the web has been prospering for two decades in spite of the quasi-monopoly state of browsers. It's living evidence that the dominant browser vendor doesn't have as much power as people imagine.
> prospering
Like 90%+ of internet traffic goes to a handful of sites owned by tech giants. Most of what's left is SEO garbage serving those same tech giants' ad networks.
The web has been prospering?
Obviously not things like blogs, or things you’d find via search, or independent forums, or newspaper websites. They certainly aren’t prospering.
But walled gardens like YouTube, Discord, ChatGPT and suchlike that are delivered via the browser are prospering. And as a cross platform GUI system, html is astonishingly popular.
(things which are not the web but happen to be delivered by the same protocols)
"One of the most adopted technologies, the one that is permeating into even native desktop and mobile apps, is not prospering." - HN users, probably.
There's a difference between web technologies and "the web" as an amorphous philosophical construct. Web technologies, as you stated, are obviously doing just fine. I'd argue the latter isn't. To be more specific, the latter as it was envisioned (in a way that I, and I speculate, GP also still subscribe to) 20+ years ago.
"The web" implies an interconnected ecosystem of websites. That the same tech has found adoption with walled gardens is irrelevant.
What is Google? Or Amazon?
I’m sure you can come up with more examples of extremely high value business which would not have happened without the web.
I don't think we all necessarily agree that "high value businesses" is the same as "prospering". If you mean "prospering" as in "making some people rich", sure, but if you mean "being beneficial to society at large", it's certainly debatable.
More like three decades, but I get your point. ;) I remember running Netscape 0.9 something back in 1994.
I remember when you could go down to Circuit City and buy a web browser for money. It came in a cardboard box. Shortly after the era you describe.
But hey it’s totally not a monopoly!
“Free markets and capitalism”. They give us superior, user centered, transparent products.
I had no idea what XSLT even was until today. Reading the submission, the thread linked by u/troupo below, and Wikipedia, I find that it's apparently used in RSS parsing by browsers, because RSS is XML and then XSLT is "originally designed for transforming XML documents into other XML documents" so it can turn the XML feed into an HTML page
I agree RSS parsing is nice to have built into browsers. (Just like FTP support, which I genuinely miss in Firefox nowadays, but allegedly usage was too low to warrant the maintenance.) I also don't really understand the complaint from the Chrome people who are proposing this: "it's too complex, high-profile bugs, here's a polyfill you can use". Okay, why not stuff that polyfill into the browser then? Then it's already inside the JavaScript sandbox that you need to keep secure anyway, and everything just keeps working as it was. Replacing some C++ code sounds like a win for safety any day of the week.
On the other hand, I don't normally view RSS feeds manually. They're something a feed reader (in my case: Blogtrottr and AntennaPod) works with. I can also read the raw XML if there's ever a reason to look at it, and the server can transform the RSS XML into XHTML with the same XSLT code, right? If it's somehow a big deal to maintain, and RSS is the only thing that uses it, I'm also not sure how big a deal it is to have people install an extension if they regularly view RSS feeds on sites where the server can't render that information to HTML. It's essentially the same solution as if Chrome put the polyfill inside the browser: the browser transforms the XML document inside the JS sandbox.
It's much more general purpose than that. RSS is just XML after all. XSLT basically lets you transform XML into some other kind of markup, usually HTML.
I think the principle behind it is wonderful. https://www.example.com/latest-posts is just an XML file with the pure data. It references an XSLT file which transforms that XML into a web page. But I've tried using it in the past and it was such a pain to work with. Representing things like for loops in markup is a fundamentally inefficient thing to do, JavaScript based templating is always going to win out from the developer experience viewpoint, especially when you're more than likely going to need to use JS for other stuff anyway.
It's one of those purist things I yearn for but can never justify. Shipping XML with data and a separate template feels so much more efficient than pre-prepared HTML that's endlessly repetitive. But... gzip also exists and makes the bandwidth savings a non-issue.
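To make that concrete, here's a minimal sketch of the pattern (file names and data hypothetical): an XML document that points at a stylesheet, plus an XSLT template that loops over the entries and emits HTML, entirely client-side.

    <?xml version="1.0"?>
    <?xml-stylesheet type="text/xsl" href="posts.xsl"?>
    <posts>
      <post title="Hello, world"/>
      <post title="Second post"/>
    </posts>

    <!-- posts.xsl: turns the document above into an HTML list -->
    <xsl:stylesheet version="1.0"
        xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
      <xsl:template match="/posts">
        <html><body>
          <ul>
            <!-- the "for loop in markup" mentioned above -->
            <xsl:for-each select="post">
              <li><xsl:value-of select="@title"/></li>
            </xsl:for-each>
          </ul>
        </body></html>
      </xsl:template>
    </xsl:stylesheet>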
RSS likely isn't the only thing that uses it. XSLT is basically the client side declarative template language for XML/HTML that people always complain doesn't exist (e.g. letting you create your own tags or do includes with no server or build steps).
I understand that there are more possible uses for the tool, but RSS is the only one I saw someone mention. Are there more examples?
It may be that I don't notice when I use it, if the page just translates itself into XHTML and I would never know until opening the developer tools (which I do often, fwiw: so many web forms are broken that I have a habit of opening F12, so I always still have my form entries in the network request log). Maybe it's much more widespread than I knew of. I have never come across it and my job is testing third-party websites for security issues, so we see a different product nearly every week (maybe those sites need less testing because they're not as commonly interactive? I may have a biased view of course)
It's by far the easiest way to do templated pages. I use it for my personal stuff (e.g. photo albums I share with my mom), but I can't imagine Google cares about the non-commercial web.
I think I've read some governments still use it, which would make sense since they usually don't have a super high budget for tons of developers, so they have to stick to the easy way to do things.
Right, that sounds like a blind spot of mine as well. We test nearly only commercial products (or open source projects large enough to get commercial backing), and in private time, of course I'd come across big websites sooner than across small ones. Still, I'm surprised I never even heard of it (also considering we literally had a class on XML and the features, like these DTDs that I never found a use for in the decade since). Sounds like I should look into XSLT, since I also build a lot of small tools and simple old tech is generally right up my alley!
I use it to maintain our product catalog at work. The server does the final rendering of the complete document but as a page is getting edited the preview is getting rendered in the browser. Back to what everyone is saying, this isn't important enough to move the needle for people making these decisions.
Almost every single government organization uses it to publish their official documents. Lots of major corporations too.
As much of a monopoly as Chrome is, if they actually try to remove it they're likely to get a bunch of government web pages outright stating "Chrome is unsupported, please upgrade to Firefox or something".
Huh? I mainly see official government documents as annoying PDFs. Thankfully someone had the bright idea to turn the national law's text into a proper webpage and not use an image-like format for that. (I think regional governments also publish laws as PDF though.) Double checking now, yes: that's definitely HTML and not a transformed XML
Which government or governmental organizations are you talking about?
Yes - PDF documents generated using XSL-FO (XSL Formatting Objects) from an XML source document.
Ah right, so that wouldn't be affected by this change because it happens all server-side if I understand it correctly?
Looks like someone pointed out in the related thread yesterday[0] that this little known site[1] is using it for client-side templating.
[0] https://news.ycombinator.com/item?id=44909599
[1] https://www.congress.gov/117/bills/hr3617/BILLS-117hr3617ih....
Would be a real shame if the world had a harder time engaging with that site
(but I take your point that there exists at least one government in the world that uses it)
> Are there more examples?
Practically every WordPress site with one of the top two SEO plugins (I'm not familiar with others) serves XML sitemaps with XSLT. It's used to make the XML contents human readable and to add a header explaining what it is.
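For illustration, here's roughly what such a sitemap stylesheet boils down to (markup hypothetical, mirroring what those plugins ship): it matches the standard sitemap namespace and renders the URL list as a table with an explanatory header.

    <xsl:stylesheet version="1.0"
        xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
        xmlns:sm="http://www.sitemaps.org/schemas/sitemap/0.9">
      <xsl:template match="/sm:urlset">
        <html><body>
          <h1>XML Sitemap</h1>
          <p>This file lists the site's URLs so search engines can crawl it.</p>
          <table>
            <!-- one row per URL in the sitemap -->
            <xsl:for-each select="sm:url">
              <tr><td><xsl:value-of select="sm:loc"/></td></tr>
            </xsl:for-each>
          </table>
        </body></html>
      </xsl:template>
    </xsl:stylesheet>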
Did you ever use a sitemap as a human? I've only ever seen it recommended for SEO, and search engines are perfectly capable of parsing sitemap.xml without needing it turned into some transformed format, or at least so was my understanding (been a while since I looked into sitemaps or SEO). It seems to only be linked in robots.txt, not to any humans: https://www.sitemaps.org/protocol.html#informing
Every (WordPress) site with an SEO plugin should be fine, since the search engines can still read it, and that's the goal of an SEO plugin.
> I also don't really understand the complaint from the Chrome people that are proposing it: "it's too complex, high-profile bugs, here's a polyfill you can use".
Especially considering the number of complex standards they have no qualms about, from WebUSB to 20+ web-components standards.
> On the other hand, I don't normally view RSS feeds manually.
Chrome metrics famously underrepresent corporate installations. There could be quite a few corporate applications using XSLT, as it was all the rage 15-20 years ago.
My guess is that they're fine with WebBluetooth/USB/FileSystem/etc. because the code for the new standard is recent and sticks with modern security sensibilities.
XSLT (and basically anything else that existed when HTML5 turned ten years old) is old code using old quality standards and old APIs that still need to be maintained. Browsers can rewrite them to be all new and modern, but it's a job very few people are interested in (and Google's internal structure heavily prioritizes developing new things over maintaining old stuff).
Nobody is getting a promotion for modernizing the XSLT engine. Very few people even use XSLT in their day to day, and the biggest product of the spec is a competitor to at least three of the four major browser manufacturers.
XSLT is an example of exciting tech that failed. WebSerial is exciting tech that can still prove itself somehow.
The corporate installations still doing XSLT will get stuck running an LTS browser, like they did with IE11 and the many now-failed strains of technology it still supports (anyone remember ActiveX?).
We pentest lots of corporate applications, so if this had been widely deployed in the last ~8 years that I've been doing the job full time, I don't know how I would have missed it (like, I never even saw a talk about it, never saw a friend using it, never heard a colleague having to deal with it... there are lots of opportunities besides getting such an assignment myself). Surely there are talks on it if you look for them; I just don't have the impression that this is a common corporate thing, at least among the kinds of customers we have (mainly larger organizations). A sibling comment mentions they use it on their hobby site, though.
XSLT was the blockchain, nft, metaverse of the mid?-2000s. Was totally going to solve all of our problems.
I thought XML was that big hype, not XSLT. That I somehow never saw mentioned that you can do actual webpages and other useful stuff with it is probably why I never understood why people thought XML was so useful ^^' I thought it was just another data format like JSON or CSV, and we might as well have written HTML as {"body":{"p":"Hello, World!"}} and that it's just serendipity that XML was earlier
XML was the data storage - IBM DB2 supported it natively, in a similar way to how Postgres supports jsonb.
You'd use XSLT to translate your data into a webpage. Or a mobile device that supported WML/WAP. Or a desktop application.
That was the dream, anyhow.
XSLT actually solved a lot of problems for me last week, processing JSON into relational data.
Huh! I'm learning a lot here today. Trying to find more info, indeed the top answer on Stack Overflow to "XSLT equivalent for JSON" is XSLT itself: https://stackoverflow.com/a/49011455/. Hard to find how you'd actually use it, though; basically all the results I get for "xslt json" are about different tools that convert between JSON and XML.
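For what it's worth, XSLT 3.0 (not the 1.0 that browsers ship) has a standard json-to-xml() function, so a 3.0 processor such as Saxon can work on JSON directly. A minimal sketch, assuming a hypothetical people.json shaped like {"people": [{"name": "..."}]}:

    <xsl:stylesheet version="3.0"
        xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
        xmlns:fn="http://www.w3.org/2005/xpath-functions"
        exclude-result-prefixes="fn">
      <!-- Default entry point; no source XML document needed. -->
      <xsl:template name="xsl:initial-template">
        <!-- json-to-xml() parses JSON text into a standard XML representation -->
        <xsl:variable name="doc"
            select="json-to-xml(unparsed-text('people.json'))"/>
        <table>
          <!-- one row per object in the "people" array -->
          <xsl:for-each select="$doc/fn:map/fn:array[@key='people']/fn:map">
            <tr><td><xsl:value-of select="fn:string[@key='name']"/></td></tr>
          </xsl:for-each>
        </table>
      </xsl:template>
    </xsl:stylesheet>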
At the time I ran across lots of real websites using it. I successfully used it myself at least once too. Off the top of my head, Blizzard was using it to format WoW player profiles for display in the browser.
But XSLT is at least actually useful
So is the metaverse, at least depending on the definition. Second Life is mentioned as an example of one on Wikipedia, and that died pretty quickly because it was more of a mechanism than a destination in itself. The general concept of hanging out online with an avatar and friends is not gone at all.
5G was another hype word. Can't say that's not useful! I don't really notice a difference with 4G (and barely with 3G) but apparently on the carrier side things got more efficient and it is very widely adopted
I guess there's a reason the Gartner hype cycle ends with widespread adoption and not with "dead and forgotten": most things are widely picked up for a reason. (Having said that, if someone can tell me what the unique selling point of an NFT was, I've not yet understood that one xD)
This is tragic. I believe we should have gone the other way and included xslt 3.0 in the baseline browser requirements.
Actually, I think removing XSLT is bad because it means we become more tied to JavaScript or other languages for XML transformation, instead of a DSL designed for this specific purpose.
Which means more unreadable code.
But if they decide to remove XSLT from the spec, I would be more than happy if they removed JS too. The same logic applies.
Having browsers transform XML data into HTML via XSLT is a cool feature, and it works completely statically, without any server-side or client-side code. It would be a shame if that were removed. I have a couple dozen XML databases that I made accessible in a browser using XSLT...
So annoying, XSLT is very powerful but browsers let it languish at 1.0
XSLT 1.0 is still useful though, and absolutely shouldn't be removed.
Them: "community feedback"
Also them: <marks everything as off-topic>
This came about after the maintainer of libxml2 found that giving free support to all these downstream projects (from billionaire and trillionaire companies) was too much.
Instead of just funding him, they have the gall to say they don't have the money.
While this may be true in the microcosm of that project, the devs should look at the broader context and who they are actually working for.
The XSLT juice is worth the squeeze, but only to a tiny minority of users, and there are costly rewrites to do to keep XSLT in there (for Chrome, at least).
Here's what I wish could happen: allow implementers to stub out the XSLT engine and tell users who love it that they can produce a memory-safe implementation themselves if they want the functionality put back in. The passionate users and preservationists would get it done eventually.
I know that's not a good solution because a) new xslt engine code needs to be maintained and there's an ongoing cost for that for very few users, b) security reviews are costly for the new code, c) the stubs themselves would probably be nasty to implement, have security implications, etc. And, there's probably reasons d-z that I can't even fathom.
It sucks to have functionality removed/changed in the web platform. Software must be maintained though; cost of doing business. If a platform doesn't burden you with too much maintenance and chooches along day after day, then it's usually a keeper.
This proposal seems to be aimed at removing native support in favor of a WASM-based polyfill (like PDF.js, I guess) which seems reasonable?
Google definitely throws its weight around too much w.r.t. web standards, but this doesn't seem too bad. Web specifications are huge and complex, so trying to slim down a little bit while maintaining support for existing sites is okay IMO.
No, that would indeed be reasonable, but the proposal is to remove XSLT from the standard and remove Chrome support for XSLT entirely, forcing websites to adopt the polyfill themselves.
Which is, to me, silly. If you ship the polyfill then there's no discussion to be had. It works just the same as it always has for users and it's as secure as V8, no aging native codebase with memory corruption bugs to worry about.
> It works just the same as it always has for users
No it doesn't. An HTML page constructed with XSLT written 10 years ago will suddenly break when browsers remove XSLT. The webmaster needs to add the polyfill themselves. If the webmaster doesn't do that, then the page breaks.
From a user perspective, it only remains the same as before if the webmaster adopts the polyfill. From the web developer perspective, this is a breaking change that requires action. "shipping the polyfill" requires changes on many many sites - some of which have not needed to change in many years.
It may also be difficult to do. I'm not sure what their proposed solution is, but often these are static XML files that include an XSLT stylesheet - difficult to put JS in there.
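For instance, a typical setup looks like this (names and contents hypothetical): a static XML file whose only hook is the stylesheet processing instruction. There's no HTML page anywhere in which to drop a polyfill script tag.

    <?xml version="1.0"?>
    <?xml-stylesheet type="text/xsl" href="feed.xsl"?>
    <rss version="2.0">
      <channel>
        <title>Example feed</title>
        <item><title>First post</title></item>
      </channel>
    </rss>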
I meant the browser shipping the polyfill themselves. I'm in agreement with you.
At the moment, XSLT in a browser doesn't depend on Javascript, so works even if JS is turned off. Using a polyfill instead will mean that XSLT will only work if JS is turned on.
That depends how the browsers implement it, no? Much of modern browser's user interface is also built using web technologies including JS and that doesn't break if you "disable JS".
Last I checked, it’s a polyfill that Chrome won’t include by default - they’re just saying that they’d have a polyfill in JS and it’s on site authors to use it.
That breaks old unmaintained but still valuable sites.
As a user you can only use the polyfill to replace the XSLTProcessor() JavaScript API. You can't use the polyfill if you're using XSLT for XML Stylesheets (<?xml-stylesheet … ?> tags).
(But of course, XML Stylesheets are most widely used with RSS feeds, and Google probably considers further harm to the RSS ecosystem as a bonus. sigh)
Moz also has no love for RSS, having removed support for live bookmarks in Firefox 64 (2018) and no longer displaying the RSS icon anywhere in the UI when a website has any <link rel="alternate" type="application/rss+xml"> tags. If you want to subscribe to feeds you have to jump through a bunch of hoops instead of it being a single click.
Fortunately, Thunderbird still has support for feeds and doesn't seem to have been afflicted by the same malaise as the rest of the org chart. Who knows how long that will last.
Ah, okay. I guess that's another one I'll add to the list of hostile actions towards the web then.
I completely understand the security and maintenance burdens that they're bringing up but breaking sites would be unacceptable.
The polyfills are something devs have to include and use. It means all the pages that cannot be updated will be broken.
The polyfills suggested are for the servers to do the transforms, not the browser.
Setting aside the discussion of the linked issue itself (tone, comments, etc), I feel like I need to throw this out there:
I don't understand the point in having a JS polyfill and then expecting websites to include it if they want to use XSLT stuff. The beauty of the web is that shit mostly just works going back decades, and it's led to all kinds of cool and useful bits of information transfer. I would bet money that so much of the weird useful XSLT stuff isn't maintained as much today - and that doesn't mean it's not content worth keeping/preserving.
This entire issue feels like it would be a nothing-burger if browser vendors would just shove the polyfill into the browser and auto-run it on pages that previously triggered the fear-inducing C++ code paths.
What exactly is the opposition to this? Even reading the linked issue, I don't see an argument against this that makes much sense. It solves every problem the browser vendors are complaining about and nothing functionally changes for end users.
Chrome is a browser – it can’t remove something from the spec. Perhaps this should say Google proposes to remove it from the spec.
Chrome is the dominant browser. Sad as this may be, removing it from Blink means de facto removing it from the spec.
That being said, I'm not against removing features, but neither this nor the original post provides any substantial rationale for why it should be removed. Uses for XSLT do exist, and the alternative is "just polyfill it", which is awkward, especially for legacy content.
But a browser doesn’t have agency – it’s Google that is doing this.
By metonymy it's referring to the browser's owner.
Not sure if you missed it, but a few days before this PR, Google did propose removing it from the spec.
But that's my point - Google is proposing removing it from the spec. It's kinda weird to reformulate it for the headline as 'Chrome' is doing it.
I don't get the people complaining that they need it on their low-power microcontrollers yet instead of using an XSLT library they'd rather pull in Chromium.
With how bloated browsers are right now, good riddance IMO
They are not talking about pulling in Chromium on a microcontroller. Their web server is on a microcontroller, so they want to minimize server side CPU usage and force the browser to do their XSLT transformation.
Since it's a microcontroller, modifying that server and pushing the firmware update to users is probably also a pain.
Unusual use case, but a reasonable one.
Yeah, I don't think XML + XSLT is any better than, or allows anything that, sending say JSON and transforming it with JS wouldn't. However, that would require changing the firmware, which as you mention may be difficult or impossible.
I think they're talking about outputting XML+XSLT on those microcontrollers, i.e. just putting out text. Chromium would come in for the viewer who's loading whatever tiny status-webpage those microcontrollers are hosting on a separate device.
There are better candidates to remove from the spec than XSLT, like HTML. The parsing rules for HTML are terrible, and they hinder further advancement of the spec more than anything. The biggest mistake of HTML was backpedaling on the switch to XHTML.
Removal of anything is problematic, though; better to freeze parts of the spec at specific compatibility versions and get browsers to ship optional compatibility modes that let you load and view old sites.
I remember having built a static site that was 100% xml data and xslt transformers in the early 2000s
Quite fun at the time
I saw XSLT used to transform RSS feeds into something nicely human readable. That is, the RSS feed was referencing the XSLT. Other than that I haven't noticed the use of XSLT on the web.
IBM owns a very high-performance XSLT engine they could probably open-source or license to the browser makers. If anyone from IBM is here, they may want to consider it.
If security and memory-safety are the concern and there is already a polyfill, why remove the API from the standard instead of just using the WASM-based polyfill internally?
They want to punt a half-baked polyfill over the wall and remove support from the browser so they don't have to do any maintenance work, making it someone else's problem.
If this is in response to Nick Wellnhofer's announcement from three months ago that he would stop embargoing/prioritizing libxslt/libxml2 CVEs due to lack of manpower (which I suspect is a consequence of flooding projects with bogus LLM-generated findings from students wanting to pad their profiles), wouldn't it be possible to ship an emscripten-compiled libxslt instead of libxslt proper?
Or just Xee.
So Google is bringing the deprecation treadmill to the web, yay!
Yegge called it:
https://steve-yegge.medium.com/dear-google-cloud-your-deprec...
"""
> Because I sometimes get similar letters from the Google Cloud Platform. They look like this:
>> Dear Google Cloud Platform User,
>> We are writing to remind you that we are sunsetting [Important Service you are using] as of August 2020, after which you will not be able to perform any updates or upgrades on your instances. We encourage you to upgrade to the latest version, which is in Beta, has no documentation, no migration path, and which we have kindly deprecated in advance for you.
>> We are committed to ensuring that all developers of Google Cloud Platform are minimally disrupted by this change.
>> Besties Forever,
>> Google Cloud Platform
> But I barely skim them, because what they are really saying is:
>> Dear RECIPIENT,
>> Fuck yooooouuuuuuuu. Fuck you, fuck you, Fuck You. Drop whatever you are doing because it’s not important. What is important is OUR time. It’s costing us time and money to support our shit, and we’re tired of it, so we’re not going to support it anymore. So drop your fucking plans and go start digging through our shitty documentation, begging for scraps on forums, and oh by the way, our new shit is COMPLETELY different from the old shit, because well, we fucked that design up pretty bad, heh, but hey, that’s YOUR problem, not our problem.
>> We remain committed as always to ensuring everything you write will be unusable within 1 year.
>> Please go fuck yourself,
>> Google Cloud Platform
"""
But if you live in a capitalist country with a free market, several competitors should pop up and offer to migrate your system into their cloud for free, shouldn't they? No way capitalists would overlook an unoccupied market niche.
Oh look over there, is that Azure?
Intent to remove: emergency services dialling (911, 112, 000, &c.)
Almost no one ever uses it: metrics show only around 0.02% of phone calls use this feature. So we’re planning on deprecating and then removing it.
—⁂—
Just an idea that occurred to me earlier today. XSLT doesn’t get a lot of use, but there are still various systems, important systems, that depend upon it. Links to feeds definitely want it, but it’s not just those sorts of things.
Percentages only tell part of the story. Some are tiny features that are used everywhere, others are huge features that are used in fewer places. Some features can be removed or changed with little harm—frankly, quite a few CSS things that they have declined to address on the grounds of usage fall into this category, where a few things would be slightly damaged, but nothing would be broken by it. Other features completely destroy workflows if you change or remove them—and XSLT is definitely one of these.
Do we know WebKit's, KHTML's, and Gecko's stance on this?
I know this is for security reasons, but why not update the XSLT implementation instead? And if features that aren't used get dropped, they might as well do it all in one go. I am sure lots of the HTML spec isn't even used.
WebKit's in favor: https://github.com/whatwg/html/issues/11523#issuecomment-314...
"Cautiously" in favour.
If it was just for security reasons, they could sponsor FOSS development on the implementation.
I am of the opinion that it is to remove one of the last ways to build web applications that don't have advertising and tracking injected into them.
I get the impression they are ripping it out because they don't want to sponsor the FOSS volunteer working on it or deal w/ maintaining it themselves. The tracking/advertising take doesn't hold much water for me as adding those things to the page is something developers and companies choose to do. You could just as easily inject a tracking script tag or pixel or whatever via XSLT during transformation if you wanted.
> I am of the opinion that it is to remove one of the last ways to build web applications that don't have advertising and tracking injected into them.
Er, how so? What stops you from doing so in HTML/JS/CSS ?
KHTML has been discontinued and was barely maintained for several years before. It has not been a relevant party for about a decade if not more.
Despite the rather heated discussion they started just two weeks prior: https://github.com/whatwg/html/issues/11523
It’s another “we listened to the community and nobody told us no” moment. Like Go’s telemetry issue.
Google is boneheaded and hostile to open web at this point, explicitly.
> It’s another “we listened to the community and nobody told us no” moment. Like Go’s telemetry issue.
Go changed their telemetry to opt-in based on community feedback, so I'm not sure what point you're trying to make with that example.
No. The official statement from Brian was “I received a couple of personal e-mails from some credible people who stated that their data belonged to them, so we (I) decided to make it opt-in” (paraphrased).
I spent days in that thread. To them, that uproar was just “a noisy minority that isn’t worth listening to.”
It's weird to see you try to make hay out of Google doing the thing you actually wanted them to do.
Please see my other comment in the same thread.
where is this official statement, I don't think you even managed to get the name right
Sorry. Was at dinner. It's Russ Cox. Being hungry and in a hurry doesn't help with remembering names.
The GitHub discussion is there: https://github.com/golang/go/discussions/58409
but the words of Russ's that I cited are here: https://groups.google.com/g/golang-dev/c/73vJrjQTU1M/m/WKj7p...
Copying verbatim:
So being a person who had just started programming Go and made some good technical comments didn't matter at all. Only people with clout mattered, and the voice had to come from the team itself. Otherwise, us users' influence is "fuck all" (sorry, my blood boils every time I read that comment from Russ).

I mean, yeah, I too would probably prefer to read a few well-reasoned arguments over email than to wade through hundreds of hateful, vitriolic, accusatory comments from randos in a GitHub thread. Being an open-source maintainer is hard.
Or, you know, do the right thing from the start, considering that forced telemetry you have to opt out of is universally reviled, and every project that includes it suffers from literally the same issues.
Looks like they're going to ram it through anyway, no matter the existing users. There's got to be a better way to deal with spam than just locking the thread to anyone with relevant information.
WHATWG literally forced W3C to sign a deal and obey their standards. WHATWG is basically Google + Apple + Microsoft directly writing the browser standards. Fixing Microsoft's original mistake of Internet Explorer of not creating a faux committee lol.
w3c architecture astronauts have no place dictating standards that they can't implement.
"Heated discussion" sounds like any comment voicing legitimate concern being hidden as "off-topic", and the entire discussion eventually being locked. Gives me Reddit vibes, I hope this is not how open web standards are managed.
If it's a security issue, shouldn't the browsers just replace C++ code with the JS or WASM polyfill themselves?
I also wondered about that. They probably don't want to because they'd then have to maintain it, fix it, and allocate resources to it.
Probably a browser extension on the user side can do the same job if an XSLT relying page cannot be updated.
This seems like the kind of thing that won't require any resources to maintain, other than possible bugfixes (which 3rd parties can provide). It only requires parsing and DOM manipulation, so it doesn't really require any features of JS or WASM that would be deprecated in the future, and the XSLT standard that is supported by browsers is frozen - they won't ever have to dedicate resources to adding any additional features.
That is an interesting approach; you could suggest it. In general, using JS to implement web APIs is very difficult, but using WASM might work, especially given the way XSLTProcessor works today.
This is disappointing. I was using XSLT for transforming SVGs, having discovered it early last year via a chat. Even though browsers only ship v1.0, it still allows quite a compact way to manipulate them without pulling in an extra parser dependency.
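A minimal sketch of the kind of thing I mean (the color choice is arbitrary): the classic identity transform plus one override, which recolors every fill in an SVG while copying everything else through untouched.

    <xsl:stylesheet version="1.0"
        xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
      <!-- Identity transform: copy every node and attribute as-is... -->
      <xsl:template match="@*|node()">
        <xsl:copy>
          <xsl:apply-templates select="@*|node()"/>
        </xsl:copy>
      </xsl:template>
      <!-- ...except rewrite every fill attribute along the way. -->
      <xsl:template match="@fill">
        <xsl:attribute name="fill">rebeccapurple</xsl:attribute>
      </xsl:template>
    </xsl:stylesheet>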
Ouch. Two of my old web sites use XSLT, as a way to display info from a database on administrative pages. I guess it's time to kill off those sites.
The web is so far gone at this point, they should probably remove everything but wasm.
At least that's how my cynical side feels these days.
Wait, all the web browsers had XSLT support all along?
I remember using these things in a CSCI class, and, IIRC, we were using something akin to Tomcat to do transformations on the server, before serving HTML to the browser, circa 2005/2006.
I had to look up what XSLT was (began working professionally as a programmer in 2013). Honestly, if it simplifies the spec, at this point it seems like a good idea to remove it.
XSLT came across as a little esoteric.
I support the html and browser spec being greatly simplified in general. Makes it easier to develop competing browsers.
But at the same time, people don't want web pages and web apps to become fully opaque, like Flutter web or complex, minified JS-heavy sites. Even the latter retain many of the a11y benefits of markup.
I think that's a tradeoff.
Simplest approach would be to just distribute programs, but the Web is more than that!
Another simple approach would be to have only HTML and CSS, or even only HTML, or something like Markdown, or HTML + a different simple styling language...
and yet none of that would offer the features that make web development so widespread as a universal document and application platform.
I think most people just don't care, although the a11y benefits are truly important. HTML isn't going anywhere, and often you need JS to make things more accessible.
But like, most people just want a site to work and provide value, save them time etc and the way the site is built is entirely unimportant. I find myself moving towards that side despite being somewhat of a web purist for years.
The vision of XML was a semantic web. Nowadays everybody knows that semantic is 'em' and non-semantic is 'b' or 'i'. This is simple, but wrong. In fact a notation is semantic when you can make all the distinctions you care about and (this is important) do not have to make distinctions you do not care about. In this case every distinction means something and thus is semantic.
How do you apply this to documents? They are so different. XML gives the answer: you INVENT a notation that suits just your case and use it. This way you perfectly solve the enigma of semantic.
OK, fine, but what to do with my invented notation? Nobody understands it. Well, that is OK. You want to render it as HTML; HTML has no idea about your notation, but it is (was) also a kind of XML, so you write a transformation from your notation to HTML. Now you want to render it for printed media: here is XSL-FO, go ahead. Or maybe you want to let blind people read your document too; here is (a non-existent) AUDIO-ML, just add a transformation into that format. In fact, there could be lots of different notations for different purposes (search, for instance), and they are all within a single transformation step.
And for that transformation we give you a tool: XSLT.
(I remember a piece discussed here about different languages; one of the examples of very simple languages was XSLT. That is my impression as well; XSLT is unconventional, but otherwise very simple.)
Of course you do not have to invent a new notation each time. It's equally fine to invent small specific notations and mix them with yours.
For example, imagine a specific chess notation. It lets you describe positions and sequences of moves, giving you either a game or a composition. You write about chess and add snippets in this notation. First, it can be very expressive; referring to a position need take no more than a single short element.
Given that the game is described, this can render the whole board. Or you can refer to a sequence of moves, and that can be rendered in any conventional chess move notation. And then imagine a specific search engine that crawls the web, indexes games and compositions, and can then search, for example, for other pages that discuss the same game, for similar positions, or for matching sequences of moves.
XML even had a foundation for incorporating other notations. XML itself is, indeed, verbose (although this can be lessened with good design, which is rare), but starting from v1.0 it has had a way to formally declare that the contents of an element are written in a specific notation. If that direction had been followed, it could have led to many small notations living side by side in the same document; a hypothetical sketch follows below.
The vision of XML was a federated web: lots of notations, big and small, evolving and mixing. It was dismissed on the premise that it was too strict. I myself think it was too free.
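Something like this, say (element names and notation labels entirely invented for illustration):

    <article>
      <p>
        The energy-mass relation
        <formula notation="TeX">E = mc^2</formula>
        is as memorable as the opening
        <moves notation="PGN">1. e4 e5 2. Nf3 Nc6</moves>
      </p>
    </article>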
As long as XPath is still there, I approve.
I thought the HTML spec was immutable.
The HTML spec is actually constantly evolving. New features like the dialog element [0] and popover [1] are added every year. But removing something from the spec is very rare, if it has ever happened before.
[0]: https://developer.mozilla.org/en-US/docs/Web/HTML/Reference/...
[1]: https://developer.mozilla.org/en-US/docs/Web/HTML/Reference/...
The W3C spec was. But WHATWG and HTML5 represent a coup by the dominant browser corporations (read: Google). The biggest browser dictates the "living standard" and the W3C is forced into a descriptivist role.
The W3C's plan was for HTML4 to be replaced by XHTML. What we commonly call HTML5 is the WHATWG "HTML Living Standard."
the old sages in ivory towers handed us a spec engraved in stone and expected us to live by it
no wonder they were sidelined
They weren't sidelined because they had bad ideas (XHTML 2.0 had a lot of great ideas, many of which HTML5 eventually "borrowed"), they were sidelined because they still saw the web as primarily a document platform and Google especially was trying to push it as a larger application platform. It wasn't a battle between the ivory tower and practical concerns, it was a proxy battle in the general war between the web as a place optimized to link between meaningful, accessibility-first documents and the web as a place to host generalized applications with accessibility often an afterthought. (ARIA is great, but ARIA can only do so much, not as much of it by default/a pit of success as XHTML 2.0 once hoped to be.)
The WHATWG HTML spec. is famously mutable. They literally call it a “living standard” and it separates them from the versioned W3C standard.
Yep, doesn't this make certain pages not work anymore?
It will. It will make old, non-updated pages break - the same fate as old pages that used MathML and were never updated with polyfills.
FYI, MathML is currently shipping (again, after all these years) in Chrome, Firefox, and Safari[1].
[1] https://mathml.igalia.com/
It makes them not work in Chrome. For any application that supports XSLT they'll continue to work fine.
It's immutable in the sense of "only remove stuff after incredibly careful consideration".
Which Chrome has transmuted into "we do whatever we want to do". Remember their attempt to remove confirm/prompt?
Here's the HTML spec: https://chromium.googlesource.com/chromium/
An implementation with >90% market share becomes the defacto standard.
Who else is watching this who grew up watching this same movie play out with Microsoft/IE as the villain and Google as the hero? (Anyone want to make the "live long enough" quote?)
I'm sorry but I don't understand this. If a polyfill can add xslt support then why don't browser vendors ship the polyfill and apply it automatically when necessary?
Please do
Please don’t.
As much as I think XSLT is cool, if it's used by practically nobody and contains real security vulnerabilities... oh well. You can't deny that combination is a good objective reason to remove it.
And browsers are too big with too many features; reducing the scope of what a browser does is good (but not enough by itself to remove a feature).
Maybe one day it will come back as a black-box module running in an appropriate sandbox - like I think Firefox uses for PDF rendering.
Ah, now I am glad I stuck with DSSSL :)
As a reminder for people who love XSLT: nothing is stopping you from using content negotiation to do it server-side.
Ah, the corporate enshittification ensues.
Finally! Remove this shit!
Good