Status Update
Comments
j4...@gmail.com <j4...@gmail.com> #2
You can control traffic based on the request/response headers of the target proxies [1] [2]. There is already a feature request to allow this directly on the load balancer, but I can't provide an ETA or guarantee its implementation.
Another way of doing it would be to create an internal load balancer [3] and set up a VPN tunnel from the office to your instances. This guarantees a secure connection, and the load will be balanced among your instances, but it might add extra cost.
Keep in mind that the internal load balancer is still in Alpha and is not recommended for production.
Should you have any other feature requests you would like implemented for a certain use case, please do not hesitate to open a new thread. We will be more than happy to assist in any way we can.
Any updates about this feature will be posted here as well.
Sincerely,
George
[1]:
[2]:
[3]:
eb...@gmail.com <eb...@gmail.com> #3
ke...@gmail.com <ke...@gmail.com> #4
go...@gmail.com <go...@gmail.com> #5
ya...@gmail.com <ya...@gmail.com> #6
Because the firewall rules currently in place apply only after the load balancer has already rewritten the source IP address, there is nowhere to block the "bad" IP addresses.
cc...@gmail.com <cc...@gmail.com> #7
Bumping this -- there is still no ability to whitelist/restrict IPs, and it is a real show-stopper for us. Our only other option would be to create a TCP load balancer, terminate SSL ourselves, and manage the SSL certificates manually via the Kubernetes secrets store. That really feels like a workaround rather than the intended way to work. Please implement this feature ASAP; we cannot go to production without it.
gl...@gmail.com <gl...@gmail.com> #8
ac...@gmail.com <ac...@gmail.com> #9
jo...@gmail.com <jo...@gmail.com> #10
st...@gmail.com <st...@gmail.com> #12
Scripting compute instance operations is beautifully straightforward in GCP (a lot less hassle than AWS CloudFormation can be), but AWS load balancers have had the ability to apply a security group to an ELB for some time.
gl...@gmail.com <gl...@gmail.com> #13
gl...@gmail.com <gl...@gmail.com> #14
va...@gmail.com <va...@gmail.com> #16
jo...@gmail.com <jo...@gmail.com> #17
gl...@gmail.com <gl...@gmail.com> #18
jo...@gmail.com <jo...@gmail.com> #19
ve...@gmail.com <ve...@gmail.com> #20
ha...@gmail.com <ha...@gmail.com> #21
dd...@gmail.com <dd...@gmail.com> #22
ad...@gmail.com <ad...@gmail.com> #23
ke...@gmail.com <ke...@gmail.com> #24
fe...@gmail.com <fe...@gmail.com> #25
ni...@zxgen.net <ni...@zxgen.net> #26
ro...@hochi.at <ro...@hochi.at> #27
ll...@gmail.com <ll...@gmail.com> #28
ll...@gmail.com <ll...@gmail.com> #29
mt...@gmail.com <mt...@gmail.com> #30
sh...@gmail.com <sh...@gmail.com> #31
wu...@gmail.com <wu...@gmail.com> #32
s....@gmail.com <s....@gmail.com> #33
ro...@googlemail.com <ro...@googlemail.com> #34
se...@gmail.com <se...@gmail.com> #35
ki...@gmail.com <ki...@gmail.com> #36
fd...@gmail.com <fd...@gmail.com> #37
We worked around it by configuring the web servers behind the load balancer (in the pool) to whitelist based on the X-Forwarded-For IP, but it's not ideal...
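A minimal sketch of that workaround, assuming the web server sits directly behind the GCP HTTP(S) load balancer (which appends the client IP and then its own IP to X-Forwarded-For, so the real client is the second-to-last entry); the allow-listed CIDRs here are hypothetical:

```python
import ipaddress

# Hypothetical office ranges to allow; replace with your own.
ALLOWED_NETS = [ipaddress.ip_network(n) for n in ("203.0.113.0/24", "198.51.100.0/24")]

def client_ip_from_xff(xff_header: str) -> str:
    """GCP's HTTP(S) LB appends '<client-ip>, <lb-ip>' to X-Forwarded-For,
    so the real client IP is the second-to-last entry; anything earlier
    in the list is client-supplied and spoofable."""
    parts = [p.strip() for p in xff_header.split(",")]
    return parts[-2] if len(parts) >= 2 else parts[-1]

def is_allowed(xff_header: str) -> bool:
    ip = ipaddress.ip_address(client_ip_from_xff(xff_header))
    return any(ip in net for net in ALLOWED_NETS)
```

For example, `is_allowed("203.0.113.7, 35.190.0.1")` passes, while a spoofed prefix like `"203.0.113.7, 192.0.2.5, 35.190.0.1"` is rejected because only the LB-appended entries are trusted.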
to...@gmail.com <to...@gmail.com> #38
This issue has been open for more than a year. Any updates?
ga...@gmail.com <ga...@gmail.com> #39
sc...@gmail.com <sc...@gmail.com> #40
[Deleted User] <[Deleted User]> #41
hu...@gmail.com <hu...@gmail.com> #42
[Deleted User] <[Deleted User]> #43
[Deleted User] <[Deleted User]> #44
sl...@gmail.com <sl...@gmail.com> #45
sj...@gmail.com <sj...@gmail.com> #46
jo...@gmail.com <jo...@gmail.com> #47
ma...@gmail.com <ma...@gmail.com> #48
[Deleted User] <[Deleted User]> #49
wo...@gmail.com <wo...@gmail.com> #50
to...@gmail.com <to...@gmail.com> #51
mi...@gmail.com <mi...@gmail.com> #52
br...@gmail.com <br...@gmail.com> #53
fl...@gmail.com <fl...@gmail.com> #54
fl...@gmail.com <fl...@gmail.com> #55
ni...@zxgen.net <ni...@zxgen.net> #56
se...@gmail.com <se...@gmail.com> #57
fl...@gmail.com <fl...@gmail.com> #58
ka...@gmail.com <ka...@gmail.com> #59
do...@gmail.com <do...@gmail.com> #60
ji...@gmail.com <ji...@gmail.com> #61
ji...@gmail.com <ji...@gmail.com> #62
ju...@gmail.com <ju...@gmail.com> #63
je...@gmail.com <je...@gmail.com> #64
an...@gmail.com <an...@gmail.com> #65
fl...@gmail.com <fl...@gmail.com> #66
al...@gmail.com <al...@gmail.com> #67
an...@gmail.com <an...@gmail.com> #68
fl...@gmail.com <fl...@gmail.com> #69
mi...@gmail.com <mi...@gmail.com> #70
We need IP whitelisting; we don't want development servers open to the public.
an...@gmail.com <an...@gmail.com> #71
an...@gmail.com <an...@gmail.com> #72
mi...@gmail.com <mi...@gmail.com> #73
an...@gmail.com <an...@gmail.com> #74
jo...@gmail.com <jo...@gmail.com> #75
di...@gmail.com <di...@gmail.com> #76
[Deleted User] <[Deleted User]> #77
[Deleted User] <[Deleted User]> #78
an...@gmail.com <an...@gmail.com> #79
jo...@gmail.com <jo...@gmail.com> #80
kk...@thampy.cc <kk...@thampy.cc> #81
kk...@thampy.cc <kk...@thampy.cc> #82
[Deleted User] <[Deleted User]> #83
an...@gmail.com <an...@gmail.com> #84
an...@gmail.com <an...@gmail.com> #85
an...@gmail.com <an...@gmail.com> #86
[Deleted User] <[Deleted User]> #87
[Deleted User] <[Deleted User]> #88
an...@gmail.com <an...@gmail.com> #89
fl...@gmail.com <fl...@gmail.com> #90
ma...@gmail.com <ma...@gmail.com> #91
an...@gmail.com <an...@gmail.com> #92
fl...@gmail.com <fl...@gmail.com> #93
gl...@gmail.com <gl...@gmail.com> #94
[Deleted User] <[Deleted User]> #96
di...@gmail.com <di...@gmail.com> #97
di...@gmail.com <di...@gmail.com> #98
an...@gmail.com <an...@gmail.com> #99
pi...@gmail.com <pi...@gmail.com> #100
pa...@gmail.com <pa...@gmail.com> #101
zl...@gmail.com <zl...@gmail.com> #102
gp...@gmail.com <gp...@gmail.com> #103
sh...@gmail.com <sh...@gmail.com> #104
ma...@gmail.com <ma...@gmail.com> #105
ra...@gmail.com <ra...@gmail.com> #106
re...@gmail.com <re...@gmail.com> #107
de...@gmail.com <de...@gmail.com> #108
ca...@gmail.com <ca...@gmail.com> #109
mt...@gmail.com <mt...@gmail.com> #110
vi...@gmail.com <vi...@gmail.com> #111
ke...@gmail.com <ke...@gmail.com> #112
di...@gmail.com <di...@gmail.com> #113
fe...@gmail.com <fe...@gmail.com> #114
ma...@gmail.com <ma...@gmail.com> #115
This works with the HTTP(S) load balancer.
af...@gmail.com <af...@gmail.com> #116
md...@gmail.com <md...@gmail.com> #117
+1
Reference Info: 35904903 Load balancing - Forwarding rules // No firewall rules
component: Public Trackers > Cloud Platform > Networking > Cloud Load Balancing
type: Feature Request P2 S2
Product: Compute Engine
km...@gmail.com <km...@gmail.com> #118
en...@google.com <en...@google.com>
av...@gmail.com <av...@gmail.com> #119
[Deleted User] <[Deleted User]> #120
Thanks for your interest. Google Cloud Armor (
We have supported IP-based filtering at the edge through Cloud Armor security policies for the past year, and launched the rest of the WAF capabilities as a public beta last month (November).
Read more here:
WAF Announcement deep dive blog:
Cloud Armor documentation:
GKE Ingress + Cloud Armor:
Cloud Armor rules language reference:
th...@gmail.com <th...@gmail.com> #121
1. As noted in the previous post, Cloud Armor provides a way to "create firewall rules on the load balancer" for GCP external HTTP(S) load balancing. Here's another excellent link that gives you an overview of Cloud Armor:
Backend instances or endpoints behind a GCP external HTTP(S) load balancer only need to accept connections from these IP ranges, which are used by the Google Front Ends (GFEs) and health check probers that power this load balancing solution:
•
•
2. If you're interested in one of our other five load balancers, read on:
* SSL Proxy and TCP Proxy: These load balancers currently do not support Cloud Armor, but all connectivity from the load balancer to backends is sourced from the same two ranges:
•
•
3. Internal HTTP(S) load balancers currently do not support Cloud Armor, but connectivity from the load balancer to backends is sourced from the region's proxy-only subnet. You control the IP ranges used by proxy-only subnets:
4. Network TCP/UDP and internal TCP/UDP load balancers are pass-through load balancers. There's no proxy; hence:
- Your backend VMs receive packets with the IP address of the sending client, and
- You can use GCP firewall rules to limit which clients can connect "to the load balancer."
Remember that GCP has an implied deny ingress rule for all traffic to VMs, so you only have to create ingress allow firewall rules for the specific sending clients you need:
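The implied-deny model described in point 4 can be sketched as a toy evaluation: traffic to a backend VM is rejected unless some ingress allow rule's source range matches the sender. The rule set below is illustrative, not a GCP API (130.211.0.0/22 is one of Google's documented health-check ranges; the office range is made up):

```python
import ipaddress

# Illustrative ingress allow rules mirroring GCP's model:
# everything is denied unless an allow rule matches.
ALLOW_RULES = [
    {"name": "allow-office", "source": "203.0.113.0/24", "port": 443},
    {"name": "allow-health-checks", "source": "130.211.0.0/22", "port": 443},
]

def ingress_permitted(src_ip: str, port: int) -> bool:
    ip = ipaddress.ip_address(src_ip)
    for rule in ALLOW_RULES:
        if port == rule["port"] and ip in ipaddress.ip_network(rule["source"]):
            return True
    return False  # implied deny: no matching allow rule
```

Because pass-through load balancers preserve the client source IP, this same evaluation applies whether the packet arrived directly or "through the load balancer."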
sw...@gmail.com <sw...@gmail.com> #122
d....@gmail.com <d....@gmail.com> #123
ar...@gmail.com <ar...@gmail.com> #124
+1 Please add this feature to all kinds of load balancers.
[Deleted User] <[Deleted User]> #125
na...@gmail.com <na...@gmail.com> #126
ri...@gmail.com <ri...@gmail.com> #127
mo...@gmail.com <mo...@gmail.com> #128
Thanks for the valuable input! Here's a quick update. I'd encourage everyone to focus on what feature they need rather than on an implementation. Though "firewalls at the forwarding rule" sounds simple, it's very complex to build. Hopefully the following provides some background as to why and shows you how to accomplish what you need today.
If we focus on the pass-through GCP load balancers –
For the pass-through load balancers, the forwarding rule is merely configuration telling us how to route traffic within
If we focus on external proxy load balancers –
For those reasons and those proxy load balancers, Cloud Armor is our path forward because it lets you create a per-load-balancer configuration on our shared GFE fleet. You can configure both L4 and "next generation" (L5+) "firewall" settings. We're constantly updating the Cloud Armor features. For example, I just created a single Cloud Armor security policy blocking over 100 CIDRs. My policy references ten rules and each rule has ten CIDRs in a deny-list. I can reference this policy on multiple backend services on one or more external HTTP(S) load balancers. So my single policy denies from 100 CIDRs, and it can apply to as many external HTTP(S) load balancers as I configure. One 100 CIDR deny list, multiple load balancers.
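The "ten rules of ten CIDRs" policy described above can be sketched as a small generator. This assumes the Cloud Armor rules-language form `inIpRange(origin.ip, '<cidr>')` for IP matching; the CIDRs (from the 198.18.0.0/15 benchmarking block) and priorities are made up:

```python
def armor_deny_rules(cidrs, per_rule=10, base_priority=1000):
    """Chunk a CIDR deny-list into Cloud Armor-style rule dicts,
    per_rule CIDRs per rule, each rule at its own priority."""
    rules = []
    for n, start in enumerate(range(0, len(cidrs), per_rule)):
        chunk = cidrs[start:start + per_rule]
        expr = " || ".join(f"inIpRange(origin.ip, '{c}')" for c in chunk)
        rules.append({
            "priority": base_priority + n,
            "action": "deny(403)",
            "expression": expr,
        })
    return rules

# Hypothetical 100-CIDR deny-list -> 10 rules of 10 CIDRs each.
deny_list = [f"198.18.{i}.0/24" for i in range(100)]
rules = armor_deny_rules(deny_list)
```

One such policy can then be attached to multiple backend services, matching the "one 100-CIDR deny list, multiple load balancers" point above.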
Another helpful point when discussing IP address deny-lists is to consider the purpose of the deny list. Talking about GFE-based proxy load balancers first, you get a certain amount of automatic DDoS protection from just using Google infrastructure. For pass-through external network load balancing, note that each VM in the load balancer
Does that help? If anyone has specific use-cases, please let us know here.
sp...@gmail.com <sp...@gmail.com> #129
Focusing on our needs: we have an internal HTTP(S) load balancer which has access - through the proxy-only subnet - to the backend VM instances on a specific port.
We want the load balancer's frontend IP to be accessible only to a few VMs/subnets on our shared VPC and on-premises network, not to every VM, because we are working in a PCI-DSS-compliant environment.
I would expect a "basic" feature like load balancing to be protectable, even though I understand that it is shared/global.
Do you need more explanation? Please let me know. This subject is very important for us.
te...@gmail.com <te...@gmail.com> #130
[Deleted User] <[Deleted User]> #131
Thanks!
ph...@gmail.com <ph...@gmail.com> #132
Following up on
Thank you for the specific use case – that's very clearly worded. You're correct that we currently don't offer IP address allow/deny controls for GCP internal HTTP(S) load balancers. (We do offer that kind of control using Cloud Armor for GCP external HTTP(S) load balancers.)
Here are my recommendations for regulated workloads:
-
The load balancer doesn't have to be completely unprotected even if any system in your VPC network (or on-premises network connected to your VPC network) can send packets to its IP address. To be clear, "its IP address" is the IP address of the load balancer's forwarding rule, as implemented by Google-managed Envoy proxies [*]. The load balancer's backends – your VMs – could verify that load-balanced requests they receive are legitimate. The standard way to do this is to use HTTP Authorization headers. In addition to using HTTPS transport, this model focuses on which systems are authorized to send requests to your backends, rather than looking at what IP addresses the requesting systems might use.
This option is achievable with our current internal HTTP(S) load balancer offering, but it requires that the load balancer's backends – your VMs – parse the Authorization headers that a client sends the load balancer. Our internal HTTP(S) load balancers preserve these headers when they make requests to the backends.
-
Another approach in the same spirit of access control is to use mutual TLS (mTLS, or client certificate based authentication). Instead of passing HTTP Authorization headers to the load balancer's backends, a client establishes a TLS connection with either a proxy or a backend. In terms of our current offerings, mTLS isn't currently available for our internal HTTP(S) load balancers; however, you can configure a GCP internal (pass-through) TCP load balancer to distribute traffic to backends which perform mTLS. In other words, instead of a proxy load balancer being the TLS server, you can use a pass-through load balancer whose backends are TLS servers. This is how service mesh systems like Istio are implemented in GKE Kubernetes. Services outside a cluster authenticate to services in a cluster using mTLS, and the Istio ingress "gateway" can be an internal TCP load balancer. TLS termination happens on the Envoy proxy Pods running on VMs, and those VMs can be backends of a pass-through internal TCP load balancer.
-
Whether you can use mTLS or not, it's worth considering an internal TCP load balancer for your current needs because you can control the client IP addresses which are permitted to establish TCP connections "to the load balancer" using firewall rules applicable to the load balancer's backend VMs. An internal TCP load balancer is implemented using routing in our software-defined VPC network – there's no proxy or device; it's just all our SDN. With pass-through GCP load balancers, you can control client source IP addresses using GCP firewall rules (or even hierarchical firewall policies).
An overview for internal TCP load balancing is here:
__
[*] One thing to keep in mind about the proxy systems for internal HTTP(S) load balancers: These proxies are managed by Google, and various components in our control plane can connect to them, in order to stop the proxy, start the proxy, and to do rolling updates. (Keeping the Envoy proxies current is important from a security perspective!) So there are control systems "outside of your VPC network" which can "send packets to the load balancer" according to certain interpretations.
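The Authorization-header approach from the first recommendation can be sketched as a backend-side check. The internal HTTP(S) load balancer preserves the client's Authorization header, so backends can verify it regardless of source IP. The `Bearer <client-id>:<token>` scheme and shared secret below are illustrative only; a real deployment would use a proper token format such as OIDC tokens or signed JWTs:

```python
import hashlib
import hmac

# Illustrative shared secret; in practice use a real token scheme (OIDC, JWT).
SHARED_SECRET = b"replace-me"

def expected_token(client_id: str) -> str:
    """Derive the token a legitimate client would present (illustrative HMAC)."""
    return hmac.new(SHARED_SECRET, client_id.encode(), hashlib.sha256).hexdigest()

def request_authorized(headers: dict) -> bool:
    """Verify the Authorization header a backend receives via the internal
    HTTP(S) LB, which passes the client's headers through unchanged."""
    auth = headers.get("Authorization", "")
    scheme, _, rest = auth.partition(" ")
    if scheme != "Bearer" or ":" not in rest:
        return False
    client_id, _, token = rest.partition(":")
    # Constant-time comparison avoids leaking token prefixes via timing.
    return hmac.compare_digest(token, expected_token(client_id))
```

This shifts the question from "which IP is calling?" to "which system is authorized to call?", which is the point of the first recommendation above.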