A depth limit alone often does a poor job of keeping the simulation from running too long: the number of branches varies from game to game, and in some deals the evaluation can still take far too long. This introduces a second stop condition based on the runtime of the evaluation, ensuring a single check doesn't block the game for too long. Note that the original depth limiting, while fairly effective, is a hacky solution; a better long-term fix may be to change the simulation logic from a DFS-based search to a BFS-based one.
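For illustration, a minimal sketch of what such a combined stop condition could look like; the limit values and the way the timer is threaded through the recursion are assumptions, not the actual implementation:

    #include <QElapsedTimer>

    // Hypothetical limits; the real values would be tuned per game.
    constexpr int    kMaxDepth     = 64;
    constexpr qint64 kMaxRuntimeMs = 250;

    // True when the simulation should stop expanding moves, either because
    // the depth limit or the runtime budget has been exceeded.
    bool shouldStop(int depth, const QElapsedTimer &timer)
    {
        return depth >= kMaxDepth || timer.hasExpired(kMaxRuntimeMs);
    }

    // Caller side (sketch): start the timer once, then pass it down:
    //   QElapsedTimer timer;
    //   timer.start();
    //   simulate(initialState, /*depth=*/0, timer);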
The original implementation didn't use references for the QList instances, which meant they were being copied, so changes made to them didn't mutate the actual values held by the class.
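A minimal sketch of the pitfall, with made-up class and member names:

    #include <QList>

    class Board
    {
    public:
        void appendBuggy()
        {
            // Bug: the list is copied into a local, so append() only
            // mutates the copy and m_piles stays unchanged.
            QList<int> pile = m_piles.first();
            pile.append(42);
        }

        void appendFixed()
        {
            // Fix: bind a reference so the change lands in the list
            // actually held by the class.
            QList<int> &pile = m_piles.first();
            pile.append(42);
        }

    private:
        QList<QList<int>> m_piles { { 1, 2, 3 } };
    };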
The original approach for calculating winnability first checked for inter-column movements, which isn't ideal; the new logic instead attempts foundation-pile movements first. Additionally, the function now returns an optional value, which will be null if the winnability check fails to determine a result within the given maximum depth. This check is necessary because the original logic took a very long time to finish, especially when run at the beginning of the game, where it could keep going for hundreds of moves.
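A sketch of the reshaped check, using std::optional on the C++ side (it surfaces as null in QML); GameState, Move, and the move-generating helpers are placeholders for whatever the project actually uses:

    #include <QList>
    #include <optional>

    std::optional<bool> isWinnable(const GameState &state, int depth, int maxDepth)
    {
        if (state.isWon())
            return true;
        if (depth >= maxDepth)
            return std::nullopt;              // undetermined: depth budget exhausted

        // Try foundation-pile moves before inter-column moves, since they
        // make direct progress toward the win condition.
        const QList<Move> moves = state.foundationMoves() + state.interColumnMoves();
        bool undetermined = false;
        for (const Move &move : moves) {
            const auto result = isWinnable(state.apply(move), depth + 1, maxDepth);
            if (result.value_or(false))
                return true;
            if (!result.has_value())
                undetermined = true;          // a branch was cut off by the limit
        }
        if (undetermined)
            return std::nullopt;
        return false;
    }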
QML doesn't have a proper type-safe generic list type. Returning QList instances does technically work, but qmlls (the LSP) complains about it because QList isn't a proper QML type. Instead, return QVariantList objects, which are meant for QML.
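For example, a Q_INVOKABLE returning QVariantList instead of a plain QList (class and member names are illustrative):

    #include <QList>
    #include <QObject>
    #include <QVariant>

    class Deck : public QObject
    {
        Q_OBJECT
    public:
        // Exposed to QML as a plain JS array; qmlls is happy with the type.
        Q_INVOKABLE QVariantList cardValues() const
        {
            QVariantList out;
            out.reserve(m_values.size());
            for (int value : m_values)
                out.append(QVariant::fromValue(value));
            return out;
        }

    private:
        QList<int> m_values { 1, 2, 3 };
    };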
This makes CMake export the qmlls.ini settings for qmlls (the QML language server), making it aware of the C++ components used in the project. By default, qmlls wasn't able to find these, as they live in a separate build dir it didn't know about.
Additionally, this enables exporting compile commands, to provide better support for other editors that rely on them. Note that compile_commands.json will still only be written into the build dir, so to make use of it you'll probably want to symlink it from the build dir (e.g. into the project root).
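As a sketch, assuming Qt 6.7+ (where qmlls.ini generation is available) and a build directory named "build", the relevant CMake settings look roughly like this:

    # Generate .qmlls.ini so qmlls can find the QML modules built in the build dir.
    set(QT_QML_GENERATE_QMLLS_INI ON)

    # Write compile_commands.json for editors/tooling that rely on it (e.g. clangd).
    set(CMAKE_EXPORT_COMPILE_COMMANDS ON)

    # compile_commands.json still lands in the build dir; symlink it where your
    # editor expects it, for example:
    #   ln -s build/compile_commands.json compile_commands.json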