Building PowerShell scripts with thoughtful proxy configuration starts by accepting that there are multiple layers at which Windows can apply proxy settings — system-wide WinHTTP, user-level WinINET, environment variables, module-specific options — and that relying on whichever one happens to be set this week is a recipe for brittle automation. A disciplined approach defines where responsibility sits: network and security teams own the upstream proxy endpoints and routing policies, while automation engineers decide, per script and per task, which of those endpoints to use and when.

In practice, that means centralising proxy URLs, credentials and bypass rules in configuration files, secure stores or DSC/Intune policies, then having scripts read and apply them explicitly instead of hard-coding values directly into Invoke-WebRequest calls. For simple HTTP tasks, attach parameters such as -Proxy, -ProxyCredential and -NoProxy to cmdlets in a consistent pattern, so that any new function you write inherits the same behaviour. For more complex scenarios, you may construct a [System.Net.Http.HttpClient] with a custom HttpClientHandler wired to the enterprise proxy, then share that client across modules to avoid duplicating connection setup and TLS negotiation.

When scripts run as scheduled tasks, in Azure Automation runbooks or under service accounts, the same configuration model applies: the identity under which the job runs has access to stored proxy settings, and the script knows how to retrieve and apply them before making outbound calls. Logging is part of the design from day one: every automation workflow logs which proxy profile it used, which endpoints it contacted and how many requests succeeded or failed, so that network teams can correlate PowerShell jobs with proxy telemetry and troubleshoot issues without spelunking through dozens of different script versions on individual servers.
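The "read configuration, then apply it explicitly" pattern described above can be sketched with splatting, so every web call in a script picks up the same proxy arguments. The profile file name, its schema and its location are assumptions for illustration, not a standard:

```powershell
# Hypothetical proxy profile stored outside the script, e.g. proxy-profile.json:
# { "ProxyUri": "http://proxy.corp.example:8080" }
$profilePath  = Join-Path $PSScriptRoot 'proxy-profile.json'   # assumed location
$proxyProfile = Get-Content -Path $profilePath -Raw | ConvertFrom-Json

# Build one splatting table, then reuse it on every outbound call so any new
# function inherits the same proxy behaviour instead of hard-coding values.
$proxyArgs = @{
    Proxy           = $proxyProfile.ProxyUri
    ProxyCredential = Get-Credential    # interactive here; a real job would
                                        # pull this from a secure store instead
}

Invoke-WebRequest -Uri 'https://api.example.com/status' @proxyArgs
```

Because the splat is a plain hashtable, the same `$proxyArgs` can be passed to Invoke-RestMethod or to your own advanced functions that forward it, keeping proxy handling in exactly one place.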
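For the HttpClient scenario, a minimal sketch of wiring an HttpClientHandler to an enterprise proxy and keeping one shared client looks like this; the proxy endpoint and target URL are placeholders:

```powershell
# Handler wired to the enterprise proxy (endpoint is an assumed placeholder).
$handler = [System.Net.Http.HttpClientHandler]::new()
$handler.UseProxy = $true
$handler.Proxy = [System.Net.WebProxy]::new('http://proxy.corp.example:8080')
$handler.Proxy.Credentials = [System.Net.CredentialCache]::DefaultNetworkCredentials

# One client, kept at script scope and shared across functions/modules so
# connection setup and TLS negotiation are not repeated per request.
$script:SharedClient = [System.Net.Http.HttpClient]::new($handler)

$response = $script:SharedClient.GetAsync('https://api.example.com/data').GetAwaiter().GetResult()
$response.StatusCode
```

Disposing the client once at the end of the script (rather than per request) is the usual design choice here, since HttpClient is intended to be long-lived and reused.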
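The logging discipline described above can be captured as a small per-run record; the field names, profile name and log path below are illustrative assumptions, not a defined format:

```powershell
# Minimal sketch of per-run proxy logging; names and paths are hypothetical.
$runLog = [pscustomobject]@{
    Timestamp    = (Get-Date).ToString('o')
    ProxyProfile = 'corp-default'              # which profile this run selected
    Endpoint     = 'https://api.example.com/status'
    Succeeded    = 0
    Failed       = 0
}

try {
    Invoke-WebRequest -Uri $runLog.Endpoint -Proxy 'http://proxy.corp.example:8080' | Out-Null
    $runLog.Succeeded++
}
catch {
    $runLog.Failed++
}

# One JSON line per run makes it easy for network teams to correlate
# PowerShell jobs with proxy telemetry.
$runLog | ConvertTo-Json -Compress | Add-Content -Path 'C:\Logs\proxy-runs.jsonl'
```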