Beyond Web Feature Services Part 2

Static Features on the Web — WFS without the Query

In my last post on this topic I put forth two major aspects of WFS (broadly defined) that I’m excited about. The first was how WFS 3.0, to me, is less a new iteration of a ‘service’ and more about ushering in granular geospatial API patterns that others can easily adopt. This post is about the other aspect: better exposing geospatial content and better integrating it with the web. Those who have followed my posts on the SpatioTemporal Asset Catalog will recognize many of the same themes from my Static Catalogs post. Indeed, I hope the STAC API can be compatible with WFS 3.0, which naturally raises the question: what is the WFS equivalent of Static Catalogs?

Web Features without the Service

It would be roughly the same as a WFS 3.0, except it would lack the ability to respond to any query other than ‘list all your data’. But wait, you say, isn’t that the main thing a WFS really does? I’d argue that a WFS does two things: it makes the source geospatial data available, and it offers an API to make specific queries of that data. But I believe a major ‘bug’ in much of the work the OGC has done is that those two things are tied together in a single interface. If you want to make feature data available in a standard way in the OGC world, then you need to stand up a WFS. And standing up a reliable, scalable, secure service that properly indexes the data for search is quite a bit of work, and expensive.
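To make that contrast concrete, here is a minimal Python sketch of the one operation a static host can answer: a plain GET of everything. The URL and file layout here are hypothetical assumptions, not part of any spec.

```python
import requests  # pip install requests

# Hypothetical static host; the URL and file layout are assumptions.
URL = "https://example.com/features/collections/buildings/items.json"

# The only 'query' a static host answers: GET everything.
resp = requests.get(URL)
resp.raise_for_status()
fc = resp.json()  # a plain GeoJSON FeatureCollection
print(len(fc["features"]), "features")

# Something like GET ...?bbox=-122.5,37.7,-122.3,37.8 would need a live
# service to interpret it; a static file server ignores the query string
# and returns the same full file, so any filtering happens client-side.
```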

[Image: Shapefile download on FTP or Web — still the main way to get TIGER/Line data]

That is why a lot of data is still only available as a Shapefile on a website, or often not available at all: it’s just too much of a pain to stand up a ‘service’. But it is potentially much easier to expose data as ‘Web Features’: data online that follows the Spatial Data on the Web best practices. It would have lots of links, exist at canonical, referenceable locations, always link back to its layer-level metadata, and be indexable by search engines. And it would be possible to create these data collections by simply placing flat files that link to one another on S3, Google Cloud Storage or any static web server, as sketched below.
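As a sketch of how little tooling this needs, here is a hypothetical Python script that splits a GeoJSON file into one flat file per feature, each carrying links back to the collection and to its neighbors. The link structure and file names are illustrative assumptions, not a standard.

```python
import json
from pathlib import Path

# Illustrative only: split a GeoJSON FeatureCollection into flat,
# cross-linked files that any static host (S3, GCS, nginx) can serve.
BASE_URL = "https://example.com/features/buildings"  # hypothetical

src = json.loads(Path("buildings.geojson").read_text())
out = Path("site/buildings")
out.mkdir(parents=True, exist_ok=True)

features = src["features"]
for i, feat in enumerate(features):
    fid = feat.get("id", i)
    # Each feature links back to the collection metadata and to its
    # neighbors, so crawlers and readers can walk the whole dataset.
    feat["links"] = [
        {"rel": "collection", "href": f"{BASE_URL}/collection.json"},
    ]
    if i > 0:
        prev_id = features[i - 1].get("id", i - 1)
        feat["links"].append({"rel": "prev", "href": f"{BASE_URL}/{prev_id}.json"})
    if i < len(features) - 1:
        next_id = features[i + 1].get("id", i + 1)
        feat["links"].append({"rel": "next", "href": f"{BASE_URL}/{next_id}.json"})
    (out / f"{fid}.json").write_text(json.dumps(feat))
```

The resulting directory can be uploaded as-is; there is no server-side code to run at all.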

Why is this WFS?

So what would this actually look like? The center would likely be a JSON document that is the same as a WFS 3.0 API definition, described in OpenAPI. But it would not support any query operation except a plain GET (everything) on the various ‘features’ endpoints that contain feature collections. These would be HTML or JSON pages linked together, one for each feature. And there would be optional formats that could also be static: a single GeoPackage or a large GeoJSON file, for those who want to download the whole dataset without crawling through all the linked pages. Ideally features would have lots of links to other features, so they are not just isolated bits on the web but part of a deeper web of linked objects. And eventually it’d be great to align with the linked data initiatives and have good microformats and schema definitions for each feature. But I think it’s important that we don’t require fully validating schemas as table stakes for putting one’s feature data on the web.
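For a sense of what that central document might contain, here is a hypothetical sketch of a static landing page and collection description, loosely mirroring the WFS 3.0 link structure. The exact fields, URLs and identifiers are assumptions, not a finished spec.

```python
import json
import os

BASE = "https://example.com/features"  # hypothetical static host

# Landing document: the same shape as a WFS 3.0 root, but every link
# points at a pre-rendered flat file instead of a live endpoint.
landing = {
    "links": [
        {"rel": "service-desc", "href": f"{BASE}/api.json",
         "type": "application/vnd.oai.openapi+json;version=3.0"},
        {"rel": "data", "href": f"{BASE}/collections.json"},
    ]
}

# One collection description, pointing at crawlable per-feature pages
# plus bulk-download alternates (GeoPackage, one big GeoJSON).
collection = {
    "id": "buildings",
    "title": "Buildings",
    "links": [
        {"rel": "items", "href": f"{BASE}/collections/buildings/items.json",
         "type": "application/geo+json"},
        {"rel": "alternate", "href": f"{BASE}/collections/buildings.gpkg",
         "type": "application/geopackage+sqlite3"},
    ],
}

os.makedirs("collections", exist_ok=True)
with open("index.json", "w") as f:
    json.dump(landing, f, indent=2)
with open("collections/buildings.json", "w") as f:
    json.dump(collection, f, indent=2)
```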

Some (non-standard) real world examples

[Images: a parcel page showing property information and an interactive map]

This probably looks fancier than people might imagine, but all of it can actually be implemented in an entirely ‘static’ manner, as just HTML pages and GeoJSON files on a web server (though calling out to remote tile servers for the base layers). From the parcel page I can view all the information about the property, look at a map of it, and jump to other properties from that map. I can even follow a link to view the full dataset of RM-1-1 zones on a map, and then dive back into other parcels from there.

The Open Data world centered around CKAN and DKAN has also been experimenting with similar ideas, like the Interra Data Open Data Catalog Generator:

[Screenshot: Interra Open Data Catalog Generator]

Bridging Static and Dynamic with ‘Sync’

There was work done on this in the WFS Synchronization extensions, and perhaps some of that can be reused. But I think the core can be even simpler: send a notification whenever a file has changed. Indeed, it should aim to be compatible with things like Amazon S3, which can send notifications to the Simple Notification Service when files change. Or perhaps simpler still: just a recommendation on the proper use of cache-control headers, plus a response that returns all data updated since the client last checked, using client-side pulls instead of server-side pushes. Any downstream service should be able to simply subscribe and then update its cache whenever convenient, guaranteeing that its API always has the latest data.
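As a sketch of that ‘even simpler’ client-pull option, here is a hypothetical polling client that relies only on standard HTTP caching: it sends If-Modified-Since and re-downloads the flat file only when the static host reports a change. Nothing here is WFS-specific, and the URL is an assumption.

```python
import time
import requests  # pip install requests

# Hypothetical flat file on a static host or CDN.
URL = "https://example.com/features/collections/buildings/items.json"

last_modified = None
while True:
    headers = {}
    if last_modified:
        # Standard HTTP caching: ask whether the file has changed
        # since we last pulled it.
        headers["If-Modified-Since"] = last_modified
    resp = requests.get(URL, headers=headers)
    if resp.status_code == 304:
        pass  # unchanged; our cached copy is still the latest
    elif resp.ok:
        last_modified = resp.headers.get("Last-Modified")
        data = resp.json()
        print(f"refreshed cache: {len(data['features'])} features")
    time.sleep(3600)  # pull whenever is convenient for the client
```

The appeal of this design is that the publisher does nothing beyond serving files with sensible cache headers; all the synchronization logic lives on the consumer’s side.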

Static STAC & Simple Features for the Web
