Most Go scraping stacks end up with the same architecture: a fetch layer (HTTP client), a parsing layer (GoQuery or HTML parsing), and a scheduler that decides what to fetch next and when. Colly is often chosen because it wraps scheduling, callbacks, and request management in a way that scales well, while GoQuery provides jQuery-like DOM selection that simplifies extraction from HTML pages.

Rotating proxy endpoints should be treated as part of the fetch layer rather than an afterthought: the proxy decision needs to be deterministic enough to reproduce results, but flexible enough to distribute load and prevent “hot” IPs from degrading success rates. In practice, teams choose between two stable patterns: rotation per request for breadth (useful for discovery and SERP-like collection) and sticky sessions for depth (useful for pagination, carts, logins, and multi-step flows).

The best results come from making proxy selection explicit in your job configuration, logging which proxy and region served each response, and building health checks so slow or error-prone exits are temporarily removed from the pool instead of poisoning the whole crawl.