HAProxy between TCP output and Logstash

Hi,

We have some custom applications that send their logs (so far) via unencrypted TCP (plain JSON).
We would like to collect those using Logstash in front of an Elasticsearch cluster.
Since we have multiple Logstash instances, we are also using HAProxy in front of them.
Now we would like to encrypt the whole log traffic using TLS.
What works:

  1. sending unencrypted traffic via HAProxy (though undesired)
  2. using HAProxy as a transparent Layer 4 (TCP) proxy and configuring TLS on the Logstash instances
  3. using HAProxy as a Layer 4 (TCP) proxy and configuring TLS on HAProxy as well as on the Logstash instances

The problem with solutions 2 and 3 is that, while the transport encryption works fine, we only see our HAProxy as the source of our logs in Logstash.
A solution for this (so we thought) was enabling the PROXY protocol in HAProxy and in Logstash (which, according to the documentation, only seems to understand PROXY v1).
So in our HAProxy config we added send-proxy to the corresponding backend server lines, and set proxy_protocol => true in the corresponding Logstash tcp input.
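For context, PROXY protocol v1 simply prepends a single human-readable line to the TCP stream before any payload; with made-up addresses and our port it would look like this:

```
PROXY TCP4 192.0.2.10 203.0.113.5 51234 1234\r\n
```

Everything after that line (including a TLS handshake, if any) is the original client traffic.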

Logstash config:

input {
  tcp {
    port => 1234
    codec => "json_lines"
    proxy_protocol => true
    ssl_enable => true
    ssl_verify => false
    ssl_key => "/etc/ssl/certs/host.key"
    ssl_cert => "/etc/ssl/certs/host.crt"
  }
}
...

relevant HAProxy config:

frontend logstash
    mode tcp
    option tcplog
    bind *:1234
    use_backend logstash

backend logstash
    balance roundrobin
    mode tcp
    server logstash1 logstash1.example.com send-proxy
    server logstash2 logstash2.example.com send-proxy

However, in both configurations (transparent and SSL bridge), Logstash logs the following error:

[2020-01-31T14:04:21,032][ERROR][logstash.inputs.tcp      ] Error in Netty pipeline: io.netty.handler.codec.DecoderException: javax.net.ssl.SSLHandshakeException: error:100000f7:SSL routines:OPENSSL_internal:WRONG_VERSION_NUMBER

It seems to me that HAProxy wraps the TLS stream in a PROXY protocol header, while Logstash expects the two the other way around.
Is there anything we can do on the HAProxy side to configure the encapsulation order?
Maybe there are other ways to make the original log source known to Logstash via HAProxy while retaining the benefits of a redundant load balancer.
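One idea we have considered but not tested: HAProxy's fully transparent mode, where it connects to the backends using the client's source IP instead of its own, so no PROXY protocol is needed at all. As far as we understand, this requires TPROXY support in the kernel and the Logstash hosts must route their replies back through the HAProxy machine. A rough, untested sketch:

```
backend logstash
    balance roundrobin
    mode tcp
    # spoof the original client IP towards the backends (needs TPROXY
    # kernel support and return routing via this HAProxy host)
    source 0.0.0.0 usesrc clientip
    server logstash1 logstash1.example.com:1234
    server logstash2 logstash2.example.com:1234
```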

Thanks for your ideas and help!

Hello, +1
We encounter the same issue.
We are trying to set up HAProxy between two Logstash instances.

We tried several configurations without success…
The output Logstash uses the lumberjack output plugin, and we are using the beats input plugin for reception. The connection works with this setup, but when we add HAProxy in between it does not work, even when putting the certificate on HAProxy… An answer would be appreciated :slight_smile: