Seeing as HTTP requires GET to be idempotent, and not take any action other
than retrieval, crawlers won't "interact" with well-designed websites if by
"interact" you mean "change stuff".

The RFC uses SHOULD NOT rather than MUST NOT, and the consequences of flagging content as inappropriate are *safe*, which is the gist of section 9.1.1. They can still be annoying if something comes along and flags all your content as inappropriate, but that annoyance is an acceptable outcome once the risk has been mitigated by implementing robots.txt (which I recognise is not a standard, but it is so widely adopted that I wouldn't expect trouble from a place like NLNZ).
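
To illustrate why robots.txt is a reasonable mitigation: a polite crawler checks it before fetching anything, so a Disallow rule over the action links keeps them untouched. A minimal sketch of that check from the crawler's side, using Python's standard library (the site, crawler name and /flag/ path are hypothetical):

    import urllib.robotparser

    # Hypothetical site whose "flag as inappropriate" links live under /flag/
    rp = urllib.robotparser.RobotFileParser()
    rp.set_url("https://example.org/robots.txt")
    rp.read()  # fetches and parses the site's robots.txt

    # A well-behaved crawler skips anything robots.txt disallows, so a
    # "Disallow: /flag/" rule keeps it away from the action links entirely.
    if rp.can_fetch("ExampleCrawler", "https://example.org/flag/1234"):
        print("allowed to fetch")
    else:
        print("disallowed by robots.txt")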

As far as the GET requests to links such as "flag this content" being idempotent goes, no one has said that they aren't. In the context of section 9.1.2 of the RFC, idempotent means that multiple identical requests have no greater side effect than a single request.
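
To make the safe/idempotent distinction concrete, here is a toy sketch (not anyone's actual implementation): flagging is not *safe*, because the first request changes state, but it is *idempotent*, because repeating the identical request changes nothing further.

    # Toy model of a "flag as inappropriate" endpoint (hypothetical).
    flags = set()

    def flag_content(item_id):
        # Not safe: the first request has a side effect (the item gets flagged).
        # Idempotent: repeating the request leaves the set in the same state.
        flags.add(item_id)
        return {"item": item_id, "flagged": True}

    flag_content(42)
    flag_content(42)          # identical request, no additional effect
    assert flags == {42}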

If the APIs return an appropriate Content-Type and the crawlers still
retrieve them, then the crawlers are either genuinely interested in
indexing the content retrieved by those APIs, or they're buggy and you
should report the issue.

Just because a crawler wants API content doesn't mean that site owners want the crawler to have it. Our APIs return the correct Content-Type header, yet XML that we only use to build graphs started appearing in Google search results.
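
For what it's worth, the usual way to keep such responses out of search results while still letting them be fetched is an X-Robots-Tag: noindex response header alongside the Content-Type. A simplified sketch using Python's standard library (the endpoint and payload are made up, not our actual API):

    from http.server import BaseHTTPRequestHandler, HTTPServer

    class GraphDataHandler(BaseHTTPRequestHandler):
        # Hypothetical endpoint serving the XML that feeds our graphs.
        def do_GET(self):
            body = b"<graph><point x='1' y='2'/></graph>"
            self.send_response(200)
            self.send_header("Content-Type", "application/xml")
            # Ask crawlers not to index the raw data, even if they fetch it.
            self.send_header("X-Robots-Tag", "noindex")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

    if __name__ == "__main__":
        HTTPServer(("localhost", 8000), GraphDataHandler).serve_forever()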

Cheers,
Alex