htmx gives you access to AJAX, CSS Transitions, WebSockets and Server Sent Events directly in HTML, using attributes, so you can build modern user interfaces with the simplicity and power of hypertext
htmx is small (~14k min.gz’d), dependency-free, extendable, IE11 compatible & has reduced code base sizes by 67% when compared with react
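For readers who haven’t used htmx, the “attributes” the blurb refers to look roughly like this (a minimal sketch; the /clicked endpoint and #result target are made-up names):

```html
<script src="https://unpkg.com/htmx.org"></script>

<!-- clicking the button issues an AJAX GET to /clicked and swaps the
     returned HTML fragment into the #result div -->
<button hx-get="/clicked" hx-target="#result" hx-swap="innerHTML">
  Click me
</button>
<div id="result"></div>
```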
Maybe I’m wildly misunderstanding something, not helped by the fact that I work very little with Web technologies, but…
So, in a RESTful system, you should be able to enter the system through a single URL and, from that point on, all navigation and actions taken within the system should be entirely provided through self-describing hypermedia: through links and forms in HTML, for example. Beyond the entry point, in a proper RESTful system, the API client shouldn’t need any additional information about your API.
This is the source of the incredible flexibility of RESTful systems: since all responses are self-describing and encode all of the currently available actions, there is no need to worry about, for example, versioning your API! In fact, you don’t even need to document it!
If things change, the hypermedia responses change, and that’s it.
It’s an incredibly flexible and innovative concept for building distributed systems.
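To make the quoted idea concrete, a hypermedia response in such a system might look roughly like the sketch below (the account resource, URLs, and fields are made up): the representation itself carries the links and forms that describe what can be done next.

```html
<!-- hypothetical response to GET /accounts/42: the available actions are
     encoded in the representation itself, as a link and a form -->
<div>
  <p>Account 42, balance: 100.00</p>
  <a href="/accounts/42/deposits">View deposits</a>
  <form action="/accounts/42/withdrawals" method="post">
    <input type="number" name="amount" min="0">
    <button type="submit">Withdraw</button>
  </form>
</div>
```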
Does that mean only humans can interact with a REST system? But then it doesn’t really deserve the qualifier of “application programming interface”.
No, it doesn’t mean only humans can interact with it.
The key point [of classical REST] is that responses are self-contained and self-describing. The response to a resource request tells you what actions you can take on it. There is no need for application domain knowledge, whether implicit or shared explicitly out of band.
Some HTTP web APIs do offer links in their JSON responses, for example previous/next references for paging or cursors, or links to other resources. I don’t think I’ve ever seen the possible actions/operations on a resource included, though, which is what the original REST would demand.
That’s how I understood it anyway.
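For illustration, a JSON response that carried not just navigation links but also the available operations, loosely in the spirit of formats like HAL or Siren (all field names and URLs here are made up), might look like this:

```json
{
  "id": 42,
  "balance": 100.0,
  "_links": {
    "self": { "href": "/accounts/42" },
    "next": { "href": "/accounts?page=2" }
  },
  "_actions": [
    {
      "name": "withdraw",
      "method": "POST",
      "href": "/accounts/42/withdrawals",
      "fields": ["amount"]
    }
  ]
}
```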
Their suggestion of using HTML rather than JSON is mainly driven by their htmx approach, which is what the project and website are about. Throughout the article, though, they leave open which data format is actually used. In your quoted text they say “for example”, and in a later example they show what JSON with hyperlinks could look like. (But then you need knowledge of that generalized meta structure.)
The author actually agrees with this take, and even links to this post making it explicit: https://intercoolerjs.org/2016/05/08/hatoeas-is-for-humans.html
It feels like he’s trying to say that something like Swagger should always be required. One of the things about SOAP, for example, was that it always had an auto-generated WSDL that you could consume to get everything. Quite a few REST endpoints were missing anything like that when they were first developed.
But I do agree that “forms” and “html” are quite the opposite of an API.
Well I’m not missing the point then, that’s good to know :)