
Add G-RRT* (Greedy RRT*) planner#95

Open
mlsdpk wants to merge 4 commits into KavrakiLab:main from mlsdpk:main

Conversation

Contributor

@mlsdpk mlsdpk commented Mar 10, 2026

Hi everyone! This follows up on #24, where I mentioned adding a bidirectional asymptotically optimal planner as an alternative to the existing AORRTC (G-RRT* is rewiring-based, as it builds on RRT*). It took a bit longer than expected, but here it is! The related publication can be found here.

I haven't updated the README yet - happy to do so once you're on board with including this planner in VAMP.

I tested this on the existing sphere cage problem. You can find some results below for both deterministic and randomized versions. I also included this test script, which might be useful for others to try out and use as a reference for adding new anytime planners.

[Figures: convergence plots, deterministic and multi-seed runs]

Collaborator

@zkingston zkingston left a comment


Thanks for the contribution! I've left some comments on the main GRRT* planner, mostly nits. I'll look at the rest later.

{ return buffer.get() + index * Configuration::num_scalars_rounded; };

std::vector<std::size_t> parents(settings.max_samples);
std::vector<float> costs(settings.max_samples, std::numeric_limits<float>::infinity());
Collaborator


Maybe be careful with infinity here; it can cause numeric issues under certain compiler optimizations.

Contributor Author


Switched to std::numeric_limits<float>::max() to be safe!
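To illustrate the concern: a minimal sketch of the sentinel-cost idea under discussion, assuming float costs as in the diff above. Infinity comparisons can misbehave under fast-math style optimization flags, so a finite max() sentinel checked with >= is a safer "unreached" marker. The names kUnreachedCost and is_unreached are illustrative, not from the PR.

```cpp
#include <limits>
#include <vector>

// Illustrative sketch, not the PR's actual code: a finite "unreached"
// sentinel cost. std::numeric_limits<float>::infinity() comparisons can
// be unreliable under fast-math compiler flags, so we use max() and >=.
constexpr float kUnreachedCost = std::numeric_limits<float>::max();

inline bool is_unreached(float cost)
{
    return cost >= kUnreachedCost;
}

// Hypothetical usage mirroring the diff: initialize all node costs to
// the sentinel, then mark the root as reached with zero cost.
inline std::vector<float> make_costs(std::size_t max_samples)
{
    std::vector<float> costs(max_samples, kUnreachedCost);
    costs[0] = 0.0f;
    return costs;
}
```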

}
start_chain.push_back(current);

for (int i = static_cast<int>(start_chain.size()) - 1; i >= 0; --i)
Collaborator


Probably not going to overflow, but is int the right width?

Contributor Author

@mlsdpk mlsdpk Apr 3, 2026


You're right! I just replaced it with reverse iterators.
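A minimal sketch of that fix: iterating the chain backwards with rbegin()/rend() instead of a signed int index, which sidesteps the integer-width question entirely. reversed_copy is a hypothetical helper for illustration, not the planner's actual function.

```cpp
#include <cstddef>
#include <vector>

// Illustrative sketch: walk a chain of node indices from back to front
// using reverse iterators rather than `for (int i = size - 1; i >= 0; --i)`,
// so no signed/width-narrowing cast is needed.
std::vector<std::size_t> reversed_copy(const std::vector<std::size_t> &chain)
{
    std::vector<std::size_t> out;
    out.reserve(chain.size());
    for (auto it = chain.rbegin(); it != chain.rend(); ++it)
        out.push_back(*it);
    return out;
}
```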

float cost_delta = costs[nbr_node.index] - nbr_new_cost;

auto &old_parent_children = children[parents[nbr_node.index]];
old_parent_children.erase(
Collaborator


Curious how efficient this is compared to just marking the old as invalid and extending. Also, potentially reserving space for the neighbors in advance. Not sure how expensive all these operations are.

Contributor Author


Yeah, you're right. The erase-remove idiom was copied directly from the OMPL implementation, but it's O(n) for both the find and the removal.

I've replaced it with the overwrite-pop pattern: same O(n) find, but O(1) removal.

I also looked at the mark-as-invalid pattern you mentioned and micro-benchmarked it against several implementations. Overwrite-pop wins overall: the hash-based marking variants pay heavily for inserts and lookups, and a simple linear scan with an O(1) pop beats both hash and vector bookkeeping.

Also added neighbors.reserve(settings.max_samples) following the pattern in aorrtc.hh and fcit.hh.
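For reference, a minimal sketch of the overwrite-pop pattern described above: find the element (O(n)), overwrite it with the last element, and pop the tail (O(1)). Element order is not preserved, which is fine for an unordered set of child indices. overwrite_pop is an illustrative name, not the identifier used in the PR.

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Illustrative sketch of overwrite-pop removal from a vector of child
// indices: linear find, then O(1) removal by moving the last element
// into the vacated slot. Order is not preserved.
inline void overwrite_pop(std::vector<std::size_t> &v, std::size_t value)
{
    auto it = std::find(v.begin(), v.end(), value);
    if (it != v.end())
    {
        *it = v.back();  // move the last element into the vacated slot
        v.pop_back();    // drop the now-duplicated tail element (O(1))
    }
}
```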
