The core reason Anubis affects search engine indexing is that its protection mechanism makes no distinction between visitors:
Operating Principle Limitations
- A proof-of-work challenge must be completed for all automated visits, including compliant search engine crawlers (a minimal sketch of the mechanism follows this list)
- Search engine crawlers such as Googlebot do not solve these computational challenges, so protected pages never reach the index
- This is an intentional design decision, not a technical flaw
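For intuition, here is a minimal sketch of the proof-of-work idea, assuming a SHA-256 challenge with a leading-zero difficulty target; the constants, the challenge string, and the function names are illustrative and not Anubis's actual implementation. The client has to burn CPU to find a valid nonce, while the server verifies the answer with a single hash; a crawler that never runs the solver never receives the content.

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"strconv"
	"strings"
)

// difficulty is the number of leading zero hex digits the hash must have.
const difficulty = 4

// solve brute-forces a nonce for the given challenge string.
// A browser does the equivalent work in JavaScript before the page loads.
func solve(challenge string) (nonce int, hash string) {
	for {
		h := sha256.Sum256([]byte(challenge + strconv.Itoa(nonce)))
		hash = hex.EncodeToString(h[:])
		if strings.HasPrefix(hash, strings.Repeat("0", difficulty)) {
			return nonce, hash
		}
		nonce++
	}
}

// verify is the server's side of the exchange: one cheap hash per request.
func verify(challenge string, nonce int, claimed string) bool {
	h := sha256.Sum256([]byte(challenge + strconv.Itoa(nonce)))
	return hex.EncodeToString(h[:]) == claimed &&
		strings.HasPrefix(claimed, strings.Repeat("0", difficulty))
}

func main() {
	challenge := "example-challenge-token" // per-visitor value issued by the server
	nonce, hash := solve(challenge)
	fmt.Println("nonce:", nonce, "hash:", hash)
	fmt.Println("verified:", verify(challenge, nonce, hash))
}
```

The asymmetry is the point: verification costs one hash, but solving scales with the difficulty setting, which is what makes large-scale automated scraping expensive.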
Recommendations
- SEO-critical scenarios: use an alternative such as Cloudflare that recognizes legitimate search engines
- Hybrid deployment: enable Anubis only on sensitive routes
- Whitelisting mechanism: add your own allow rules for search engine IPs by modifying the code (see the sketch after this list)
- robots.txt supplement: it does not stop non-compliant AI crawlers, but it still helps guide search engines for SEO
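The whitelisting and hybrid-deployment ideas could be combined in a small reverse-proxy shim placed in front of the challenge. The sketch below is a hypothetical illustration, not an Anubis feature: the ports, handler wiring, and allowedSuffixes list are assumptions, though checking crawlers by reverse DNS rather than by User-Agent alone is the method Google and Bing document for identifying their bots.

```go
package main

import (
	"log"
	"net"
	"net/http"
	"net/http/httputil"
	"net/url"
	"strings"
)

// Reverse-DNS suffixes documented for Google and Bing crawlers. Checking the
// PTR record is more robust than trusting the User-Agent header alone.
var allowedSuffixes = []string{".googlebot.com.", ".google.com.", ".search.msn.com."}

// isVerifiedCrawler reverse-resolves the client IP and checks whether the
// resulting hostname belongs to a known search-engine crawler domain.
func isVerifiedCrawler(remoteAddr string) bool {
	host, _, err := net.SplitHostPort(remoteAddr)
	if err != nil {
		return false
	}
	names, err := net.LookupAddr(host)
	if err != nil {
		return false
	}
	for _, name := range names {
		for _, suffix := range allowedSuffixes {
			if strings.HasSuffix(name, suffix) {
				return true
			}
		}
	}
	return false
}

// bypassForCrawlers routes verified crawlers straight to the origin and sends
// everyone else to the proof-of-work challenge handler.
func bypassForCrawlers(challenge, origin http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		if isVerifiedCrawler(r.RemoteAddr) {
			origin.ServeHTTP(w, r)
			return
		}
		challenge.ServeHTTP(w, r)
	})
}

func main() {
	// Hypothetical setup: the protected site runs on :8080 and the
	// proof-of-work challenge service on :8081.
	origin := httputil.NewSingleHostReverseProxy(&url.URL{Scheme: "http", Host: "localhost:8080"})
	challenge := httputil.NewSingleHostReverseProxy(&url.URL{Scheme: "http", Host: "localhost:8081"})
	log.Fatal(http.ListenAndServe(":80", bypassForCrawlers(challenge, origin)))
}
```

A production version would also cache lookups and confirm that the forward DNS record matches the client IP, but the routing decision stays the same: verified crawlers go straight to the origin, everything else must solve the challenge first.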
The developers make it clear that Anubis is better suited for the following scenarios:
- Internal systems or development and test environments
- Personal websites that you do not want indexed
- Scenarios that require precise access control, such as file sharing
This answer is based on the article "Anubis: Interfering with AI Crawler Scraping via Proof of Work".