Bug #4865

map:merge() optimization opportunities

Added by Michael Kay 9 months ago. Updated about 2 months ago.

Status:
Closed
Priority:
Normal
Assignee:
Category:
Performance
Sprint/Milestone:
-
Start date:
2020-12-24
Due date:
% Done:

100%

Estimated time:
Legacy ID:
Applies to branch:
10, trunk
Fix Committed on Branch:
10, trunk
Fixed in Maintenance Release:

Description

map:merge() is very inefficient if given two maps where the first is a singleton.

See https://github.com/eXist-db/exist/issues/3682 example 6.

Using map:put() here cuts the execution time from 9m to 90ms.

Reversing the order of the maps would probably be equally effective; it would be worth starting with the largest map as the baseline.

So we could do a static optimization (rewrite as map:put) for the case where we can see that one of the maps is a singleton, or we could do a dynamic optimization by re-ordering the maps.
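The dynamic strategy can be sketched as follows. This is an illustrative Java sketch, not Saxon's internals: it always iterates over the smaller operand of a two-way merge. Under map:merge()'s default duplicates="use-first" option, an entry from the first operand wins on a key collision, so the overwrite rule has to be flipped when the operands are processed in the reverse order (the complication noted in comment #1 below).

```java
import java.util.HashMap;
import java.util.Map;

final class MapMergeSketch {

    // Two-way merge under duplicates="use-first" semantics: on a key
    // collision the entry from `first` wins. We always iterate the
    // smaller operand, adjusting the overwrite rule to compensate.
    static <K, V> Map<K, V> mergeUseFirst(Map<K, V> first, Map<K, V> second) {
        boolean firstIsSmaller = first.size() <= second.size();
        Map<K, V> small = firstIsSmaller ? first : second;
        Map<K, V> large = firstIsSmaller ? second : first;
        // A persistent map would share structure here instead of copying,
        // making the whole merge cost proportional to the smaller map.
        Map<K, V> result = new HashMap<>(large);
        for (Map.Entry<K, V> e : small.entrySet()) {
            if (firstIsSmaller) {
                result.put(e.getKey(), e.getValue());         // first wins: overwrite
            } else {
                result.putIfAbsent(e.getKey(), e.getValue()); // first (the large map) wins
            }
        }
        return result;
    }
}
```

For a persistent (immutable) map the payoff is that only min(|A|, |B|) insertions are performed, rather than rebuilding the larger map entry by entry.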

History

#1 Updated by Michael Kay 9 months ago

Unfortunately it's greatly complicated by the fact that the order of maps in map:merge() is significant. If we change the order of maps, we need to adjust the duplicates option accordingly.

#2 Updated by Michael Kay 9 months ago

Dimitre Novachev reports on Slack that this version of the query is also very slow:

let $contains := Q{http://www.w3.org/2005/xpath-functions/map}contains#2,
    $put := Q{http://www.w3.org/2005/xpath-functions/map}put#3
return
  fold-left((1 to 50000, 1),
            map { 'distinct' : map{}, 'dups' : map{} },
            function($accum as map(*), $current as xs:integer) {
              if ($contains($accum?distinct, $current))
              then $put($accum, 'dups', $put($accum?dups, $current, ()))
              else $put($accum, 'distinct', $put($accum?distinct, $current, ()))
            }
  ) ?dups

It seems to be spending its time determining the item type of the map.

It comes down to 66ms if you replace the dynamic function calls with static calls on map:contains and map:put. The code is spending all its time type-checking the content of the maps, which in some cases requires scanning all the entries in a map; evidently we are able to avoid this when the function signatures are known statically. There are several potential opportunities for improvement: (a) statically rewriting the dynamic function calls as static function calls; (b) retaining type information about maps through map:put() rather than recomputing it; (c) finding out why we are determining the map type at all when nothing obvious in the query requires it.

#3 Updated by Michael Kay 9 months ago

The dynamic function call triggers dynamic evaluation of the function conversion rules (TypeHierarchy.applyFunctionConversion()), which calls getItemType() on the supplied map even though the required type allows any map. It would be better to test "if (!requiredType.matches(suppliedValue))" first.

Confirmed that this is sufficient to solve the problem.

Generally, calling getItemType() is bad news and it is rarely needed.
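The fast path described above can be sketched as follows. The names here are hypothetical, not Saxon's actual TypeHierarchy API: the point is simply to test whether the supplied value already matches the required type before deriving its exact item type, since for a required type of "any map" the matches() test is O(1), whereas computing the precise item type of a map can mean scanning every entry.

```java
// Hypothetical required-type abstraction; matches() is a cheap test.
interface RequiredType {
    boolean matches(Object value);
}

// A required type that accepts any map: matching is O(1), no entry scan.
final class AnyMapType implements RequiredType {
    public boolean matches(Object value) {
        return value instanceof java.util.Map;
    }
}

final class ConversionSketch {
    // Sketch of the function conversion rules with the cheap check first.
    static Object applyFunctionConversion(Object supplied, RequiredType required) {
        if (required.matches(supplied)) {
            return supplied;   // fast path: getItemType() is never called
        }
        // Slow path (elided): derive the item type and convert the value.
        throw new UnsupportedOperationException("conversion not sketched");
    }
}
```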

#4 Updated by Michael Kay 9 months ago

It is also reported that this version of the query fails with a StackOverflow error

declare namespace map =
    "http://www.w3.org/2005/xpath-functions/map";
let $seq := (1, 1 to 500000, 1, 2)
let $counts := fold-left($seq, map { }, function($sofar, $this) {
  map:put($sofar, $this, head(($sofar($this) + 1, 1)))
}) return
  map:for-each($counts, function($key, $count) {
    $key[$count > 1]
  })

#5 Updated by Michael Kay 9 months ago

The problem with the Chain is not (as I suspected) that it has become unbalanced, but rather that it contains zero-length entries (50,000 of them, as it happens).

This problem occurs when an iteration of map:for-each returns an empty sequence. Fixing map:for-each to avoid adding empty sequence results to the Chain is easy; but we should probably fix Chain.iterate() so it can handle this situation as well.

In fact the problem with the Chain is that it contains a single very long segment, and we are recursing to process the elements of this segment; we avoid recursion for a Chain containing many segments, but not for a Chain containing one very long segment.
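Both fixes can be illustrated with a small sketch (not Saxon's Chain class): a sequence stored as a chain of segments, where empty segments are never appended, and iteration uses plain loops rather than recursion, so neither many segments nor one very long segment can overflow the stack.

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;
import java.util.NoSuchElementException;

final class ChainSketch<T> {
    private final List<List<T>> segments = new ArrayList<>();

    void append(List<T> segment) {
        if (!segment.isEmpty()) {          // drop zero-length segments eagerly
            segments.add(segment);
        }
    }

    Iterator<T> iterate() {
        return new Iterator<T>() {
            private int seg = 0, pos = 0;

            public boolean hasNext() {
                // Advance past exhausted segments with a loop, not recursion.
                while (seg < segments.size() && pos >= segments.get(seg).size()) {
                    seg++;
                    pos = 0;
                }
                return seg < segments.size();
            }

            public T next() {
                if (!hasNext()) throw new NoSuchElementException();
                return segments.get(seg).get(pos++);
            }
        };
    }
}
```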

#6 Updated by Michael Kay 9 months ago

  • Fix Committed on Branch 10, trunk added

The problem with Chain is now fixed on the v10 and v11 branches.

#7 Updated by Michael Kay 9 months ago

Returning to the original problem with map:merge().

I'm attempting to change the two-way merge so we always merge the smaller map into the larger.

This reveals the problem that a HashTrieMap created using addEntry has an unknown size, so the size is computed by scanning the map, which takes O(n) time.

I have experimented with providing a size() method on the underlying ImmutableHashMap class, but this too operates in linear time, because the nodes of the tree representing an ImmutableHashMap are linked lists that compute size() by scanning the list.

Maintaining the map size at the HashTrie level through calls on addEntry() and remove() solves the problem, though it increases the cost of addEntry because we need to check whether the value is already present, which involves a separate call on get().
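The idea of maintaining the size alongside the structure can be sketched like this. This is illustrative, not Saxon's HashTrieMap: the entry count is carried on an immutable wrapper so that size() is O(1), and addEntry() first checks whether the key is already present (the extra lookup cost mentioned above) to know whether the count grows.

```java
import java.util.HashMap;
import java.util.Map;

final class CountedMap<K, V> {
    private final Map<K, V> entries;   // stands in for the immutable trie
    private final int size;            // maintained incrementally

    CountedMap() {
        this(new HashMap<>(), 0);
    }

    private CountedMap(Map<K, V> entries, int size) {
        this.entries = entries;
        this.size = size;
    }

    CountedMap<K, V> addEntry(K key, V value) {
        boolean present = entries.containsKey(key);  // the extra get()
        Map<K, V> copy = new HashMap<>(entries);     // a trie would share structure
        copy.put(key, value);
        return new CountedMap<>(copy, present ? size : size + 1);
    }

    CountedMap<K, V> remove(K key) {
        if (!entries.containsKey(key)) {
            return this;
        }
        Map<K, V> copy = new HashMap<>(entries);
        copy.remove(key);
        return new CountedMap<>(copy, size - 1);
    }

    int size() {
        return size;                   // O(1): no scan of the structure
    }
}
```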

#8 Updated by Michael Kay 9 months ago

  • Status changed from New to Resolved
  • Priority changed from Low to Normal
  • Applies to branch 10, trunk added

Changes complete. To summarise:

(a) map:size() is now faster because the size is maintained through calls of map:put(), though this incurs a small extra cost on map:put().

(b) map:merge() performs a sequence of two-way merges: maps 1 and 2 are merged, then the result is merged with map 3, and so on. For each two-way merge, the smaller map is now merged into the larger, giving a substantial improvement when the first map is very small and the second is very large.

(c) the dynamic version of the function conversion rules (used for dynamic function calls) is now a lot more efficient when an argument needs no conversion, especially in the case where it is a map or array.

(d) a Chain that contains empty segments is now handled more efficiently.

#9 Updated by Michael Kay 5 months ago

The change to Chain.java was found to have adverse effects on the tests in TestSharedAppend, so it has been reverted. The other changes remain.

#10 Updated by O'Neil Delpratt 5 months ago

  • Status changed from Resolved to Closed
  • % Done changed from 0 to 100
  • Fixed in Maintenance Release 10.5 added

Bug fix applied to Saxon 10.5 maintenance release.

#11 Updated by Michael Kay about 2 months ago

Note that the Chain class has been redesigned for Saxon 11: it is replaced by ZenoChain, which keeps the lengths of segments close to a logarithmic sequence under append() operations.
