The curious case of pagination for Gremlin queries

Why is pagination a hard problem for TinkerPop graph databases? What can we do as application developers? What else can we achieve as a by-product?

Jayanta Mondal
10 min read · Oct 16, 2018

If you have looked for efficient ways to paginate the results of your Gremlin queries, you may have stumbled upon this post on StackOverflow, perhaps only to realize that pagination is a difficult problem for most TinkerPop-enabled graph databases to solve. Here, I am referring to the response by Stephen, who is one of the most prolific contributors to the graph technology community. While there doesn't appear to be any silver bullet for the problem, as application developers we can do much better by shouldering some of the responsibility ourselves.

TLDR: Supporting efficient pagination on generic graph queries is an almost impractical problem for graph database providers to solve. In fact, even if pagination were supported natively, it might not provide client applications with much benefit, at least with respect to latency and cost. The reason is that the state a database must maintain to efficiently retrieve a subsequent page can be arbitrarily complex, and the state itself can be very expensive to derive. As users of graph databases, though, we can devise relatively efficient client-side pagination. The trick is to bucketize the traversal scope (or exploration scope) of a Gremlin query, and then run the traversal for each bucket to generate the pages. A minor downside is that, depending on the complexity of a query, the size of the pages generated by these buckets may not be precisely known ahead of time. A major upside, on the other hand, is somewhat automatic rate limiting, specifically in the context of throughput-provisioned cloud databases.

Why is pagination support hard for TinkerPop graph databases?

Let's first look at the general requirements for "pagination" support, and then analyze the difficulties graph databases face in implementing it.

Pagination Requirement 1. Given a query, a client application may request the results one page at a time. The most common definitions of a page are: (a) a fixed count of results, or (b) a fixed size (in bytes) of results. Also note that there is typically no time limit within which a subsequent page must be requested.

Pagination Requirement 2. The client application may request to skip multiple pages and fetch the following page.

Supporting Pagination Requirement 1: Fetch one page at a time

To return results one page at a time, a database needs to maintain some state, which can later be used to resume the computation for the next page.

In the relational world, such state management is relatively simple. For example, for "SELECT * FROM Table" queries, the state can be a pointer to the most recent row returned to the client. For more complex SQL, the state can be a bit more elaborate, but it is nonetheless very small (think bytes). And because the state is small, it is much easier to reload the context and resume the computation for the next page.
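To make the contrast concrete, here is a minimal sketch of that relational bookkeeping (a hypothetical Python example using an in-memory SQLite table; the table and column names are made up for illustration). The entire state needed to resume is a single integer, the key of the last row returned.

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, payload TEXT)")
    conn.executemany("INSERT INTO items (payload) VALUES (?)",
                     [(f"row-{i}",) for i in range(100)])

    def fetch_page(last_id, page_size=10):
        # Resuming is a single indexed seek past the saved pointer.
        rows = conn.execute(
            "SELECT id, payload FROM items WHERE id > ? ORDER BY id LIMIT ?",
            (last_id, page_size)).fetchall()
        return rows, (rows[-1][0] if rows else last_id)

    page, cursor = fetch_page(last_id=0)   # page 1
    page, cursor = fetch_page(cursor)      # page 2, resumed from one integer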

However, for graph queries, the state that needs to be maintained can be arbitrarily complex and large, primarily due to the unstructured and often iterative nature of graph traversals. To understand this argument, let's look at an example. Say the query we need to paginate is g.V(A).out().out(), with a requested page size of 5.

Figures 1 and 2 show the parts of the graph that were visited during the computation of the 1st and 2nd pages respectively, and the state we needed to store to compute each following page.

Figure 1: State of the graph when the 1st page was computed. Note that all of C's neighbors were visited before all of B's neighbors. This indicates that B's and C's neighbors were accessed in parallel.
Figure 2: State of the graph when page 2 was generated. Note that, even though A, B, C, and D don't have any more neighbors to visit, we still need to keep a pointer to their next edges for the computation of the 3rd page. We could have removed that state, but doing so would require re-traversing the graph to check that they indeed had no more neighbors. So it doesn't matter when we pay that cost, whether at the end of page 2 or at the beginning of page 3. The beginning of page 3 is preferable, as page 3 may never be fetched at all.

Some of the key observations here are:

Observation 1: The state grows linearly with the number of non-result (or intermediate) vertices visited during a traversal. While this conclusion seems specific to this example, I would argue that it holds in general as well. While I haven't tried to derive a formal proof, the basic intuition lies in the following analogy.

In the relational world, "tables are first-class citizens" and queries are written by referring to tables directly. We can't really refer to a "row" or "column" directly; they have to be accessed by applying filtering conditions on tables. This limits the amount of state that needs to be maintained. For example, when we write a query joining two tables, it is enough to keep one pointer per table to compute the next page of the join. In other words, the state is roughly proportional to the number of first-class citizens referred to in the query.

In the world of graph databases, however, "vertices are first-class citizens" (in TinkerPop, edges and properties are as well), and one can refer to them directly in graph queries. Now, analogous to the relational world, when we execute a graph traversal we are effectively conducting a complex join among the vertices that take part in the traversal. In the general case, to support pagination, we need to store some state for each participating first-class citizen, as each of them may contribute to the final output during the computation of the next page.

When we execute a graph traversal we are effectively conducting a complex join among the vertices that take part in the traversal.

Observation 2: Generating the state needed to compute the next page is roughly equivalent to running the traversal again. During a graph traversal, once a page worth of results has been generated, we need to compute the state that can then be used for the next page. However, generating that state is non-trivial. We would need to unwind the operator stack to see what state the participating vertices are in. Not only that, the state information needs to be mapped to the operators in the execution tree, because, depending on the complexity of the graph, the same vertex can appear in the same state via multiple paths.

Observation 3: The example shown above is a very basic graph query on a structurally well-behaved graph (in this case, a tree). Gremlin offers 60-odd different steps, including advanced constructs like repeat..until, choose (effectively if-then-else), order by, group by, map, fold, etc. With all these steps, state management gets arbitrarily complex.

Observation 4: In this specific case, some optimization could be done to reduce the pagination state, but it won't work in the general case. For the given query, we could perform a depth-first traversal and make sure that the edges of a vertex are fetched in a fixed order, reducing the state. However, this approach limits parallelism and eliminates batching opportunities (i.e., executing the same step() for multiple vertices at the same time).

While this discussion could go on longer, I think the points covered so far provide enough material to see why supporting pagination in a graph database is a complex beast. More importantly, even if graph databases supported pagination, it doesn't seem that it would save the client application anything, either in latency or in cost.

Supporting Pagination Requirement 2: Skip pages

Needless to say, this requirement is even more demanding. Here, keeping a pointer to the last result is not sufficient; the database needs the ability to efficiently compute the start pointer of a future page.

Again, for simple "SELECT * FROM Table" queries, the computation involves calculating the number of rows that need to be skipped and seeking to the row that begins the requested page.

For graph queries, the starting point of a future page is impossible to compute without actually executing the query and discarding all the results corresponding to the skipped pages. So supporting this requirement in a graph database won't save the client application anything.

This summarizes our discussion on the challenges for supporting pagination in graph databases.

PS: In this discussion, we are assuming that computing all the results of a query up front and storing them for an arbitrarily long period of time is not a viable solution. The resource governance needed to support such an approach would be very cumbersome. Moreover, the customer would pay a lot of unnecessary cost, especially when only a few of a large number of pages are fetched.

What can we do as application developers?

Once we have accepted that efficient support for native pagination may not be a top priority for graph database vendors, and for good reason, designing an effective workaround is a reasonably manageable task.

In the rest of this section, I will discuss how I have approached this problem while working with Azure Cosmos DB's Graph API, and I believe the general philosophy applies to other graph databases as well.

Divide and Conquer the pagination problem:

As mentioned before, the idea is to divide the input space of a Gremlin query into multiple buckets. Let's start with an example. Say we want to paginate the query g.V(). As you can imagine, running g.V() without pagination support can be problematic. The query can be segmented into the following 36 queries:

g.V().has('id', gte('0')).has('id', lt('1'))

g.V().has('id', gte('1')).has('id', lt('2'))

…………………………………………

g.V().has('id', gte('9')).has('id', lt('A'))

g.V().has('id', gte('A')).has('id', lt('B'))

…………………………………………

g.V().has('id', gte('Y')).has('id', lt('Z'))

g.V().has('id', gte('Z'))

Key point 1: The assumption here is that each vertex id is a string Guid, and a Guid can only start with [0-9] or [A-Z]. So this naturally gives you 36 pages. Azure Cosmos DB can push these has() conditions down to the database level, making each of these queries efficient.
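For illustration, here is a minimal Python sketch that generates these 36 bucketed queries; the cosmos_client.submit() call at the end is a hypothetical stand-in for however your driver submits a Gremlin string (e.g., gremlinpython's Client.submit):

    import string

    # First characters of the Guid, in ASCII order: '0'-'9' then 'A'-'Z'.
    BUCKET_CHARS = string.digits + string.ascii_uppercase

    def bucket_queries(base="g.V()", prop="id"):
        # 35 half-open ranges [c, next_c) plus one open-ended range starting at 'Z'.
        pairs = zip(BUCKET_CHARS, BUCKET_CHARS[1:])
        queries = [f"{base}.has('{prop}', gte('{lo}')).has('{prop}', lt('{hi}'))"
                   for lo, hi in pairs]
        queries.append(f"{base}.has('{prop}', gte('{BUCKET_CHARS[-1]}'))")
        return queries

    for query in bucket_queries():
        print(query)
        # page = cosmos_client.submit(query).all().result()  # hypothetical driver call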

Key point 2: Note that if you need to increase or decrease the number of pages, you can always do that by splitting or merging these ranges. For example,

has('id', gte('0')).has('id', lt('1')) can be split into the following two ranges:

  1. has('id', gte('0')).has('id', lt('0m'))
  2. has('id', gte('0m')).has('id', lt('1'))

Similarly,

g.V().has('id', gte('0')).has('id', lt('1')) and g.V().has('id', gte('1')).has('id', lt('2')) can be merged into:

  1. g.V().has('id', gte('0')).has('id', lt('2'))

Key point 3: This splitting and merging can also be done dynamically. Once the first page is fetched, the page size of the subsequent pages can be adjusted by splitting or merging. Of course, the assumption here is that the Guids are evenly distributed by their first character, which I have found to be true in general.
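A sketch of what those split/merge helpers might look like, where a bucket is a (low, high) pair of id prefixes and high=None means an open-ended range. The midpoint character 'm' mirrors the example above and assumes the second character is also roughly uniformly distributed:

    def split(lo, hi, mid_char="m"):
        # ('0', '1') -> ('0', '0m'), ('0m', '1')
        mid = lo + mid_char
        return (lo, mid), (mid, hi)

    def merge(a, b):
        # ('0', '1') + ('1', '2') -> ('0', '2'); buckets must be adjacent
        assert a[1] == b[0], "buckets are not adjacent"
        return (a[0], b[1])

    def to_query(lo, hi, base="g.V()", prop="id"):
        q = f"{base}.has('{prop}', gte('{lo}'))"
        return q if hi is None else q + f".has('{prop}', lt('{hi}'))"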

Key Point 4: Even though this example uses the 'id' of a vertex for pagination, you can choose any other property (or a combination of multiple properties). Any property that can reasonably divide the inputs into groups can be used for this purpose. As the application developer, you are the best person to make that decision.

How to pick the right step to apply pagination?

In the previous example, it is obvious that the pagination needs to be applied after the V() step. But let's take another example:

g.V(v1).out().out()

In this case, the V(v1) step outputs only one vertex, so applying pagination right after it is useless. Applying pagination after the first out() step, however, makes a lot of sense. In fact, if you go back to Figure 1, you will see that if we had split g.V(v1).out() into two groups consisting of {A, B} and {C, D}, things would have worked out for the better. The queries would have been:

g.V(v1).out().has('id', gte('A')).has('id', lte('B')).out()

g.V(v1).out().has('id', gte('C')).has('id', lte('D')).out()

Key point 1: Note that the sizes of the returned pages will not be uniform. The first query returns a page of size 5, and the second a page of size 3. In my opinion, though, that's a minor inconvenience compared to not having pagination support at all.

Key point 2: You pay the cost of g.V(v1) once per page, so twice in this case. But again, this is a negligible cost compared to what the database would incur if it supported pagination natively, for the reasons discussed in the previous section.

Key point 3: It is possible that you may need to apply pagination after the second out(), but that only makes sense when the first out() generates very few results and the second out() generates a large number of results.

This brings up an interesting point about the balance between:

  1. how much cost the pagination is going to save you, and
  2. how much of the query (or how many steps) you have to execute once per page.

Of course, the goal here is to keep the part that needs to be executed multiple times to a minimum. At a high level, the guideline is to apply pagination at the point where the query starts to branch out, leading to a large number of results, as the sketch below illustrates.
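Here is a hedged sketch that injects the bucket predicate at the fan-out step: prefix is the part executed once per page, suffix the part we want bucketed. The names, the 'v1' id, and the half-open ranges are illustrative, not a prescribed API:

    def bucketed_traversals(prefix, suffix, buckets, prop="id"):
        for lo, hi in buckets:
            predicate = f".has('{prop}', gte('{lo}'))"
            if hi is not None:
                predicate += f".has('{prop}', lt('{hi}'))"
            yield prefix + predicate + suffix

    for q in bucketed_traversals("g.V('v1').out()", ".out()",
                                 buckets=[("A", "C"), ("C", None)]):
        print(q)
    # g.V('v1').out().has('id', gte('A')).has('id', lt('C')).out()
    # g.V('v1').out().has('id', gte('C')).out()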

Tackling the RequestRateTooLarge error as a by-product, on the Azure Cosmos DB Graph API

This section is primarily dedicated to the Azure Cosmos DB Graph API. Azure Cosmos DB follows a provisioned-throughput model, which means that if you exceed the provisioned request rate, you get throttled by the server (aka the RequestRateTooLarge error).

With non-paginated queries, say g.V() or g.E(), the database doesn't know when to stop and will try to fetch all the vertices or edges of the graph as fast as possible to compose a response to the client. With this, there is a high chance of hitting the RequestRateTooLarge error, especially if you have a large number of vertices/edges coupled with insufficient provisioned throughput.

Applying pagination to such queries automatically introduces pauses, thereby helping avoid the RequestRateTooLarge error even with low provisioned throughput. One can even shorten the page size, or introduce delays between the paginated calls to the database, to lower the required provisioned throughput even further.
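A minimal sketch of that self-imposed pacing, assuming a submit callable that wraps your driver (the fixed sleep is deliberately naive; a production version would likely back off on an actual RequestRateTooLarge response):

    import time

    def run_paced(queries, submit, delay_seconds=1.0):
        results = []
        for query in queries:
            results.append(submit(query))  # one bucket's worth of work on the server
            time.sleep(delay_seconds)      # let the provisioned throughput budget refill
        return results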


These opinions are my own and not the views of my employer (Microsoft).