The hyperscalers stopped that timeline from winning, though.
YouTube had Atom feeds, and I don't think Amazon and Microsoft have relevant syndication.
Meta is surely responsible but that's it, imo.
<feed xmlns:yt="http://www.youtube.com/xml/schemas/2015" xmlns:media="http://search.yahoo.com/mrss/" xmlns="http://www.w3.org/2005/Atom">
I don't think they are linked to anywhere, but the URL is http://www.youtube.com/feeds/videos.xml?channel_id=<channel_id>

They dumped microformats and standards in favor of soupy, error-tolerant formats that benefitted their search engine and made it harder for other efforts to make information shareable and accessible.
They wanted it to be easy to get information in, but for you to have to go through them to get information out.
I liked Atom's clean design, but I felt it was mostly pushed by Google (I may be misremembering), and in the end the syndicated web faded into obscurity anyway.
There's really no good reason to use anything other than Atom.
What do you like about XML? I feel like I'm missing something.
Obviously, that's only a benefit if you care about and utilize those features; most teams doing JSON integrations will just build those into the consumer in lieu of them being provided by the transport. But it is something that some people (especially larger enterprise organizations) value.
In addition, JSON is easier to parse and to map to common data structures of programming languages.
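For example, in Python the whole parse is one call, with objects becoming dicts and arrays becoming lists (the feed-shaped data here is made up for illustration):

```python
import json

# JSON structures map directly onto native data types:
# objects -> dicts, arrays -> lists, strings/numbers/booleans as-is.
data = json.loads('{"title": "Example Feed", "entries": [{"id": 1}, {"id": 2}]}')
print(data["title"])             # Example Feed
print(data["entries"][0]["id"])  # 1
```

No schema, no namespaces, no parser configuration needed; that convenience is a big part of why teams reach for it.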
JSON is still figuring it out.
As for DTD: https://en.wikipedia.org/wiki/Document_type_definition
Basically it tells the system what elements are allowed in which places and what attributes they can contain.
<!ELEMENT html (head, body)>
Defines an html element that can contain a head and a body, nothing else. Anything extra or missing will fail the validator.

It was kinda-sorta eventually superseded by XML Schema, which could also define what KIND of data the attributes could contain, but DTDs did sit at the top of XML/HTML/SGML documents for years.
In retrospect, it's useful for creating islands of sanity/enforcement in a codebase: a lightweight way to give type annotations across organizational boundaries.
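To make that concrete, here's a sketch of a tiny DTD plus a document that validates against it; the element and attribute names are invented for illustration, not taken from any real schema:

```xml
<!-- Illustrative DTD: feed holds one or more entry elements, each entry
     holds a title and a link, and link requires an href attribute. -->
<!DOCTYPE feed [
  <!ELEMENT feed (entry+)>
  <!ELEMENT entry (title, link)>
  <!ELEMENT title (#PCDATA)>
  <!ELEMENT link EMPTY>
  <!ATTLIST link href CDATA #REQUIRED>
]>
<feed>
  <entry>
    <title>Hello</title>
    <link href="https://example.com/hello"/>
  </entry>
</feed>
```

A validating parser would reject an entry that dropped its title, or a link with no href, which is exactly the cross-team enforcement described above.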
> we use an XML parser to parse it to JSON and even then it's not perfect
I can't quite picture this: how does one parse XML to JSON? I assume there's code that's parsing XML and returning a JSON object? What would make this not perfect, other than a poor implementation of the translator? Would it help if they used JSON? If JSON is a less expressive format than XML, is it even possible to 100% translate their XML to JSON?
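For what it's worth, the lossiness usually comes from XML features with no JSON counterpart: attributes, repeated child elements, and mixed content. A minimal sketch, using a deliberately naive, hypothetical converter (not anyone's actual code):

```python
import json
import xml.etree.ElementTree as ET

def xml_to_json(element):
    """Naive converter: attributes and child elements merge into one dict."""
    result = dict(element.attrib)  # attributes become keys...
    for child in element:
        # ...and so do child tags; a repeated tag silently overwrites
        # the previous one, losing data.
        result[child.tag] = xml_to_json(child) if (len(child) or child.attrib) else child.text
    return result

doc = ET.fromstring('<item id="1"><tag>a</tag><tag>b</tag></item>')
print(json.dumps(xml_to_json(doc)))  # first <tag> is lost: {"id": "1", "tag": "b"}
```

Real converters handle the repeated-tag case by promoting values to lists, but then the output shape depends on how many children happened to appear, which is its own kind of "not perfect".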
Thanks for the insight! Is this what JSDoc/Swagger is now used for?
> I can't quite picture this: how does one parse XML to JSON?
I'm not sure actually. I haven't personally seen the code, I just hear my coworkers always lambasting that API provider for their usage of XML. Maybe it's just their lack of documentation that sucks, but it's become a running joke whenever we get a new partner that the team integrating it jokes that their API is XML.
At the bottom of the article, under "See Also", there's a link to this page comparing RSS and Atom: https://www.intertwingly.net/wiki/pie/Rss20AndAtom10Compared...
It seems like the last update is from 2008, but the section on the differences has a few interesting items. I am not sure if it changed, but it says:
"The RSS 2.0 specification is copyrighted by Harvard University and is frozen. No significant changes can be made (although the specification is under a Creative Commons licence) and it is intended that future work be done under a different name; Atom is one example of such work."
The Wikipedia RSS page also has a small section comparing RSS and Atom: https://en.wikipedia.org/wiki/RSS#RSS_compared_with_Atom
"Technically, Atom has several advantages: less restrictive licensing, IANA-registered MIME type, XML namespace, URI support, RELAX NG support.[35]"
There is an npm package called astrojs-atom, but I am not sure if it is official or safe.
If any Astro core developers are reading this: please add an Atom option in addition to RSS.
Some people forged ahead with a cleaned up RDF-based version and called it RSS 1.0, while other people went ahead with the ambiguities but without RDF and called it RSS 2.0. The person publishing RSS 2.0 considered it finished and refused to update it. There was drama.
A bunch of people decided that there was too much to clean up from within that mess and started a new format, Atom. This ended up being a much better spec, with an official RFC, but by that point everybody was calling any type of feed “RSS”, even if it was Atom.
If you have the choice, you should pick Atom.
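For reference, a minimal Atom feed is short. A sketch, with placeholder IDs and URLs (the required elements themselves come from RFC 4287):

```xml
<?xml version="1.0" encoding="utf-8"?>
<feed xmlns="http://www.w3.org/2005/Atom">
  <title>Example Blog</title>
  <id>urn:uuid:00000000-0000-0000-0000-000000000000</id>
  <updated>2024-01-01T00:00:00Z</updated>
  <link href="https://example.com/"/>
  <entry>
    <title>Hello, world</title>
    <id>urn:uuid:00000000-0000-0000-0000-000000000001</id>
    <updated>2024-01-01T00:00:00Z</updated>
    <link href="https://example.com/hello"/>
    <summary>First post.</summary>
  </entry>
</feed>
```

Every feed and entry must carry a title, a unique id, and an updated timestamp, which is exactly the kind of unambiguous requirement RSS 2.0 left vague.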
A pity, though. RSS/Atom was a fantastic concept, and it's a real shame big tech killed it off.
Basically, I get to see the latest post from a random feed. Nothing else. No lists of unread new posts from all the feeds. If I like the title and short summary, I click through to the website or blog itself where I can read the whole thing. There's no FOMO this way, or an information overload. Just one post a time.
Because the whole list of feeds is curated by myself, I know that everything is at least a little interesting. I even made a category with YouTube channels that I like, so I can skip their annoying recommended-videos algo.
Next to this basic functionality, I made what I call 'Newspapers'. These are certain topics with a bunch of selected feeds attached, they get checked automatically in the background. When the Newspaper has enough articles, I see a new Newspaper appear. Otherwise it might take months before a feed is shown in the random selection.
One 'dream' of mine is to have OPML be the discovery glue between all kinds of individual personal websites and blogs. But this requires critical mass, so there's enough to discover and explore, and it needs some fun/interesting software to do it with.
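For anyone unfamiliar, OPML is just a small XML format for exchanging lists of feeds. A sketch of what a shareable blogroll file could look like, with placeholder titles and URLs:

```xml
<?xml version="1.0" encoding="utf-8"?>
<opml version="2.0">
  <head>
    <title>My blogroll</title>
  </head>
  <body>
    <!-- outline elements nest, so feeds can be grouped by topic -->
    <outline text="Tech">
      <outline text="Example Blog" type="rss"
               xmlUrl="https://example.com/atom.xml"
               htmlUrl="https://example.com/"/>
    </outline>
  </body>
</opml>
```

Most feed readers can already import and export files like this, which is why it's a plausible candidate for that discovery glue.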
Or you create a blog for yourself and you make a blogroll.
As for discovering new blogs, here are a couple of options, but there are more out there: https://ooh.directory, https://blogroll.org/