Pathfinding.txt
Implementation notes:
d -> max distance
Breadth First:
Does not need priority queue!!!
Hashset implementation:
- Uses (optimal) hashset to track visited nodes and prevent revisit
- O(1) membership operation
- O(d^2) memory
Simple Queue implementation:
- Uses a simple queue to track visited nodes and prevent revisit
- O(kd) membership operation
- O(kd) memory
Only works on uniform cost grids !!!
Should be templatable hashset vs queue
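The hashset variant above can be sketched as follows: a plain FIFO queue for the frontier plus an O(1)-membership visited set, on a uniform-cost rook's-case grid. The `grid`/`bfs_path` names and the set-of-walkable-cells representation are illustrative assumptions, not from the notes.

```python
# Sketch of the hashset-variant BFS on a uniform-cost 4-connected grid.
from collections import deque

def bfs_path(grid, start, goal):
    """grid: set of walkable (x, y) cells; returns a shortest path or None."""
    visited = {start}            # hashset: O(1) membership, O(d^2) worst-case memory
    parent = {start: None}
    frontier = deque([start])    # plain FIFO queue: expansion stays level-by-level
    while frontier:
        x, y = frontier.popleft()
        if (x, y) == goal:
            # walk the parent links back to start to recover the full path
            path = []
            node = goal
            while node is not None:
                path.append(node)
                node = parent[node]
            return path[::-1]
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if nxt in grid and nxt not in visited:
                visited.add(nxt)
                parent[nxt] = (x, y)
                frontier.append(nxt)
    return None
```

Swapping the `visited` set for a bounded queue scan would give the simple-queue variant; templating over that choice is the point made above.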
Dijkstra's (or really UCS):
Uniform cost grid:
Equivalent to BFS -> Just use BFS
Integer cost grid:
Bucket/HOT queue or binary heap (w/ decrease-key)
Floating cost grid:
Binary heap (w/ decrease-key)
Avoiding decrease-key:
Just add a duplicate node at the lower priority
Increases memory usage, but almost always faster
Should be a templatable option
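The decrease-key-avoidance trick above can be sketched like this: push a duplicate heap entry at the lower cost and skip stale entries on pop. The `weighted_neighbours` callback is an illustrative assumption.

```python
# Sketch of Dijkstra/UCS without decrease-key: duplicates in the heap,
# stale entries skipped at pop time.
import heapq

def dijkstra(start, goal, weighted_neighbours):
    dist = {start: 0}
    heap = [(0, start)]
    while heap:
        d, node = heapq.heappop(heap)
        if d > dist.get(node, float("inf")):
            continue                      # stale duplicate: ignore
        if node == goal:
            return d
        for nxt, w in weighted_neighbours(node):
            nd = d + w
            if nd < dist.get(nxt, float("inf")):
                dist[nxt] = nd            # overwrite instead of decrease-key
                heapq.heappush(heap, (nd, nxt))
    return None
```

The extra memory is the duplicate entries; the win is never having to locate a node inside the heap.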
A*:
Uniform cost grid:
Heuristic: Manhattan/chebyshev distance
Priority Queue:
Requires priority queue -> doesn't expand predictably in same way as BFS
Does not require decrease-key -> never finds a faster path than already found (if heuristic is admissible)
Priorities:
Minimum priority is manhattan/chebyshev between start and end
Maximum priority is d
TODO consider how bucket size might work -> not necessarily same as bfs/dijkstra
Bucket queue with array containing LocalStacks or LocalHeaps (because no requirement for decrease-key)
Integer cost grid:
Heuristic: Manhattan/chebyshev distance
TODO consider how bucket size might work -> not necessarily same as bfs/dijkstra
If a bucket queue is used:
array/ULL containing singly-linked list
Only ever needs pop_front xor pop_back within all buckets of a given bucket queue
Determines bfs or dfs priority tiebreak behaviour
swapping pop_front and pop_back just require changing link direction (templatable)
-> Never needs a doubly-linked list
See dijkstra
Floating cost grid:
Heuristic: Euclidean distance
See dijkstra
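The uniform-cost A* case above can be sketched as follows, with a Manhattan heuristic; since the heuristic is admissible, no decrease-key is needed (stale entries are just skipped). A binary heap stands in here for the bucket queue discussed above, and the `grid`/`astar` names are illustrative assumptions.

```python
# Minimal A* sketch on a uniform-cost rook's-case grid, Manhattan heuristic.
import heapq

def manhattan(a, b):
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def astar(grid, start, goal):
    openq = [(manhattan(start, goal), 0, start)]    # (f, g, node)
    best_g = {start: 0}
    while openq:
        f, g, (x, y) = heapq.heappop(openq)
        if (x, y) == goal:
            return g                                # cost of an optimal path
        if g > best_g.get((x, y), float("inf")):
            continue                                # stale entry: skip
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if nxt in grid and g + 1 < best_g.get(nxt, float("inf")):
                best_g[nxt] = g + 1
                heapq.heappush(openq, (g + 1 + manhattan(nxt, goal), g + 1, nxt))
    return None
```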
Priority queues:
All should be templatable to use or not use a (optimal) hashset for membership
Bucket Queue:
Monotonic -> track most recent extract-min bucket
Uniform cost grid:
Should be templatable to use LocalStack vs LocalQueue
Integer cost grid:
Should be templatable to use a (optimal) hashMAP of node pointers for decrease-key
Binary heap:
Should be templatable to use ULL vs (non-dynamic) array representation
Linked List notes:
Should be templatable to use pop_front (forward linked) vs pop_back (doubly-linked)
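The monotonic bucket queue above can be sketched like this: priorities never drop below the last extract-min bucket, so extract-min only ever scans forward from a remembered cursor. The bucket-capacity parameter and the LIFO (LocalStack) choice within a bucket are illustrative assumptions.

```python
# Sketch of a monotonic bucket queue with a forward-scanning cursor.
class BucketQueue:
    def __init__(self, max_priority):
        self.buckets = [[] for _ in range(max_priority + 1)]
        self.cursor = 0          # most recent extract-min bucket
        self.size = 0

    def push(self, priority, item):
        # monotonic invariant: never push below the cursor
        assert priority >= self.cursor
        self.buckets[priority].append(item)     # O(1) insert
        self.size += 1

    def pop(self):
        while not self.buckets[self.cursor]:    # scan forward only
            self.cursor += 1
        self.size -= 1
        return self.buckets[self.cursor].pop()  # LIFO within a bucket
```

Templating the per-bucket container (stack vs queue) would flip the DFS-vs-BFS tiebreak behaviour described above.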
Old notes:
Breadth First Search:
Unweighted grids
d -> max distance
O(d) memory complexity & O(d^2 * log(d)) time complexity:
Keep 3 sets -> AVL trees | (frontier, prev frontier, prev prev frontier)
BFS will never try to search beyond those 3 sets (I think)
4(d)+4(d-1)+4(d-2) = number of grid addresses to store (plus a bit for AVL tree non-completeness)
8 for queen's case
Visits up to ~d^2 grid cells -> at least O(d^2) time complexity
Must check if a grid cell was already visited:
O(log(d)) per cell!
Must add each grid cell to visited sets:
O(log(d))
CAN'T RETURN FULL PATH -> only returns up to first 2 optimal moves to go toward goal
CAN'T RETURN FLOW FIELD -> Forgets as frontier expands
O(d^2) memory complexity & O(d^2) time complexity:
Keep 1 queue -> frontier
Keep 1 set -> hashmap (visited)
2d^2 = number of grid cells to store
4d^2 for queen's case
Visits up to ~d^2 grid cells -> at least O(d^2) time complexity
Must check if a grid cell was already visited:
O(1) per cell
CAN RETURN FULL PATH -> retraverse grid backwards when goal found
Size of path will be <= d
CAN RETURN FLOW FIELD
Further considerations:
Max memory requirement decreases if max potential search goes off the grid
TODO add something to take this into account and reduce memory requirements accordingly
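The "CAN RETURN FLOW FIELD" point above can be sketched as a reverse BFS from the goal: every visited cell records the one step that moves it closer to the goal. Names are illustrative assumptions.

```python
# Sketch of a flow field via reverse BFS from the goal on a uniform grid.
from collections import deque

def flow_field(grid, goal):
    step_to_goal = {goal: (0, 0)}
    frontier = deque([goal])
    while frontier:
        x, y = frontier.popleft()
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (x + dx, y + dy)
            if nxt in grid and nxt not in step_to_goal:
                # step back toward the cell that expanded us
                step_to_goal[nxt] = (-dx, -dy)
                frontier.append(nxt)
    return step_to_goal
```

Any number of units can then follow the same field without re-searching.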
A* Search:
Weighted or unweighted grids
d -> max distance
Worst case equivalent to BFS, but also includes a priority queue
If weights are >=1, then this still holds for weighted grids
Priority Queue:
Probably still max 4d grid cells (or 8d for queen's case)
If weights are >=1, then this still holds for weighted grids, I think
Consider 'Bucket Queue':
Requires integer weights & integer heuristic
Weight+heuristic is index within an array
Each index points to a stack
Array is max length d (or c*d if there's a constant multiplying the heuristic above the real distance)
O(1) insert -> adding to frontier
O(d) remove-best -> popping from frontier (must determine the second lowest bucket if the current lowest bucket empties)
Ends up being O(1) average time
O(d^2) space complexity (rook's case) -> whole edge of search iteration could be same distance (d/4), and we need (d) buckets
Using the depth-first style via a stack in the buckets might reduce this to linear space in (almost) all cases. Need to test
O(d) space complexity (queen's case) -> I'm pretty sure you will (almost) always have a small/limited number of options of the same distance in the frontier at once
Maybe try unrolled linked-list of size 8, then do some tests to choose a good size empirically
Alternative is 'Binary Heap':
Works for non-integer weights &| heuristic
O(log(n)) insert -> adding to frontier
O(log(n)) remove best -> popping from frontier
O(d) space complexity -> 4d for rook, 8d for queen
Dijkstra Matrix:
2d^2 (rook) or 4d^2 (queen) cell distances/parents to store
If weights are >=1, then this still holds for weighted grids
No need to specifically implement Dijkstra's -> Just use a heuristic of 0
Early Stopping:
When goal reached -> early stop and traverse back from goal to give best path
When max distance exceeded -> early stop, find node with best w+h, traverse back to give greedy best path
When priority queue empty -> same as above (I think this might happen if you try to path into a building with no entrance)
No good tradeoffs -> always at least O(d^2) space and O(d^2) time for the worst case performance
Jump Point Search:
Unweighted Grids
d -> max distance
Worst Case:
Dijkstra matrix same size as A*
Frontier queue half the size of A*
Average Case:
Reduces number of 'visited' checks
Reduces number of operations & size of priority queue
Could potentially save some memory in most cases by using unrolled linked lists in the queue
Can still make Dijkstra matrices
Tends to find straight paths -> look nicer
Default rules work for queen's case, but can also work for rook's case by just orthogonalising the final path
JPSW:
Weighted Grids
Look at Monash Paper from 2023 that introduces it
JPS+:
Slowly changing grid
Applied on top of JPS
Algorithm:
Visits each square, determines if it's a jump point and from what directions
From each jump point, iterates away from it in its direction, and assigns a distance to that nearest jump point
Resets count on hitting another jump point in that same direction
O(n) added space complexity -> Needs 4 (rooks) or 8 (queens) values per grid cell
O(n) added time complexity -> Must iterate over entire grid
Slower than raw JPS unless a lot of JPS calls happen between recomputes
Should be possible to partially/incrementally recompute when terrain changes (i.e. not just fully recompute):
Probably not worth just for enemy movement, but possibly worth when environment changes
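One direction of the JPS+ precompute above can be sketched as a single scan: walk a row recording, for each cell, the distance to the nearest jump point behind it, resetting the count at each jump point. Using `None` for "wall only, no jump point" is an illustrative encoding the notes don't specify.

```python
# Sketch of the leftward pass of the JPS+ per-direction distance table.
def left_jump_distances(is_jump_point):
    """is_jump_point: list of bools along one row.
    Returns distance to nearest jump point to the left (None if none)."""
    dist = []
    since = None            # steps since the last jump point to the left
    for jp in is_jump_point:
        dist.append(since)
        # reset the count on hitting a jump point, as the notes describe
        since = 1 if jp else (since + 1 if since is not None else None)
    return dist
```

The full precompute runs passes like this in 4 (rook's) or 8 (queen's) directions, giving the O(n) added space noted above.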
Goal Bounding:
Unchanging grid
Applied on top of ANY grid pathfinding
Consider circular and rectangular goal bounding and test
Algorithm:
Visits each square
Calculates a Dijkstra matrix for each neighbour of that square
Assigns an 'optimal bound' to each neighbour
Bounding polygon
Encompasses all squares in search range that can be optimally reached by going through that square from the visited square
O(n) space complexity -> needs a polygon perimeter point list for each edge of each node
4 values per list for a square, 2 for a circle
O(n^2) time complexity -> runs Dijkstra/A* starting from every square in grid
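The rectangular variant above can be sketched in two parts: precompute an axis-aligned box per edge from the cells its Dijkstra run says are optimally reached through it (the 4 values per list), then at search time prune any edge whose box excludes the goal. Names are illustrative assumptions.

```python
# Sketch of rectangular goal bounding: precomputed box + search-time prune.
def edge_bound(reached_cells):
    """Axis-aligned bound of all cells optimally reached through one edge."""
    xs = [x for x, _ in reached_cells]
    ys = [y for _, y in reached_cells]
    return (min(xs), min(ys), max(xs), max(ys))

def edge_may_lead_to(bound, goal):
    """Prune an edge at search time unless the goal lies inside its bound."""
    min_x, min_y, max_x, max_y = bound
    return min_x <= goal[0] <= max_x and min_y <= goal[1] <= max_y
```

A circular bound would store 2 values (centre index + radius) and test a distance instead.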
Rectangular Symmetry Reduction:
Unchanging weighted or unweighted grids
d -> max distance
TODO
HAA*:
(Hierarchical Annotated A*)
Unchanging weighted or unweighted grids
Larger unit sizes
-> (Or we could just use regular A* [not JPS] with collision checks on each node -> slower but probably fine if most units aren't large)
d -> max distance
TODO
Hierarchical Searches:
Applied on top of ANY grid pathfinding
TODO
Relaxing search:
Algorithm:
Repeat:
Check for precomputations, fall back to no precomputation
Start with the more performant search
Check final path to see if an assumption was violated
Fall back to slower search
For uniform cost grid:
Goal Bounding -> No goalbounding
JPS+ -> JPS -> done
For non-uniform cost grid:
Goal Bounding -> No goalbounding
JPS+ -> JPS -> RSR -> A* -> done
For large units:
Goal Bounding -> no goalbounding
HAA* -> A* -> done
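The fallback chains above can be sketched as a loop over searchers ordered fastest-first, each reporting whether its assumption held. The callable shape and the violation flag are illustrative assumptions.

```python
# Sketch of the relaxing-search loop over a fastest-first fallback chain.
def relaxing_search(searchers, start, goal):
    """searchers: fastest-first list of callables returning (path, ok)."""
    for search in searchers:
        path, assumptions_held = search(start, goal)
        if assumptions_held:
            return path
    # the final, most general search should never report a violation
    raise RuntimeError("fallback chain exhausted")
```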
Near-optimal and faster -> Path splicing:
Try fast method e.g. JPS+
Check for violation in range e.g. goblin blocking path in 5 steps
Perform JPS/A* with small max distance e.g. 10, with goal being just to get back on path
Will walk around goblin and continue the initial optimal path
Will stop at a door and wait for you to open it
Maybe add door locations to rooms so we can iteratively check all of them for optimal paths
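The splicing step above can be sketched like this: on hitting a blocked cell, run a short local search whose goal set is any later cell of the original path, then stitch prefix + detour + suffix back together. `splice` and the `local_search` callback shape are illustrative assumptions.

```python
# Sketch of path splicing: local detour around a blockage, rejoining the
# original (near-optimal) path as soon as possible.
def splice(path, blocked_index, local_search):
    """local_search(start, targets) -> short detour ending on some target."""
    prefix = path[:blocked_index]               # valid cells before the block
    targets = set(path[blocked_index + 1:])     # rejoin anywhere further on
    detour = local_search(path[blocked_index - 1], targets)
    rejoin = path.index(detour[-1])             # where the detour rejoins
    return prefix + detour[1:] + path[rejoin + 1:]
```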