HTTP/2 Load Balancer

It's that time again... We are very excited to announce the latest release of Ishlangu Load Balancer ADC to the world.

30% Increase in Throughput

One of our main goals for this release was to improve the performance and configuration of SSL-based communication. This work, coupled with other improvements throughout the system, has increased throughput by up to 30%. All of our clients are advised to upgrade to version 3.1, as you will see a dramatic increase in performance. This is especially true for clients that take advantage of Ishlangu's SSL Offloading capabilities.

Simplified Certificate Management

We have simplified the process of managing certificates on an Ishlangu unit. Please read the release notes for more information, including an upgrade guide for existing clients.

HTTP/2 Load Balancing

We are also very excited to offer support for HTTP/2. You can now configure your proxies to support HTTP/2 clients such as Chrome, Safari, Firefox, and Microsoft Edge. Clients that do not support HTTP/2 will communicate with the proxy over HTTP/1.x. Please note that our HTTP/2 support is only available on SSL-enabled, HTTP-based proxies.
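
Under the hood, HTTP/2 over TLS is negotiated with ALPN during the handshake, which is why h2 support is tied to SSL-enabled proxies. As a rough sketch (using Python's generic `ssl` module for illustration, not Ishlangu's own configuration interface), a server that advertises both "h2" and "http/1.1" upgrades capable clients and lets everyone else fall back:

```python
import ssl

# Server-side TLS context offering HTTP/2 with an HTTP/1.1 fallback.
# (Illustrative only; Ishlangu's actual configuration differs.)
ctx = ssl.create_default_context(ssl.Purpose.CLIENT_AUTH)
ctx.set_alpn_protocols(["h2", "http/1.1"])  # server's preference order

def negotiate(server_prefs, client_offers):
    """ALPN-style selection: the first protocol in the server's
    preference list that the client also offers; None means no
    overlap (here, the client simply keeps talking HTTP/1.x)."""
    for proto in server_prefs:
        if proto in client_offers:
            return proto
    return None

print(negotiate(["h2", "http/1.1"], ["h2", "http/1.1"]))  # h2 (modern browser)
print(negotiate(["h2", "http/1.1"], ["http/1.1"]))        # http/1.1 (legacy client)
```
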

What are the key features of HTTP/2?

At a high level, HTTP/2:

  • is binary, instead of textual
  • is fully multiplexed, instead of ordered and blocking
  • can therefore use one connection for parallelism
  • uses header compression to reduce overhead
  • allows servers to “push” responses proactively into client caches
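
The first three points can be illustrated with the frame layout defined in RFC 7540 (a sketch, not a complete framing implementation): every HTTP/2 frame begins with a fixed 9-byte binary header, and the stream identifier inside that header is what lets many requests and responses interleave on a single connection.

```python
import struct

# RFC 7540 frame header: 24-bit payload length, 8-bit type,
# 8-bit flags, and a 31-bit stream identifier (9 bytes total).
def pack_frame_header(length, frame_type, flags, stream_id):
    # ">I" yields 4 bytes; the length field is only 3, so drop the top byte.
    return (struct.pack(">I", length)[1:]
            + struct.pack(">BBI", frame_type, flags, stream_id & 0x7FFFFFFF))

def unpack_frame_header(header):
    length = int.from_bytes(header[:3], "big")
    frame_type, flags, stream_id = struct.unpack(">BBI", header[3:9])
    return length, frame_type, flags, stream_id & 0x7FFFFFFF

# A HEADERS frame (type 0x1, END_HEADERS flag 0x4) carrying
# 64 bytes of payload on stream 1:
hdr = pack_frame_header(64, 0x1, 0x4, 1)
print(len(hdr))                  # 9
print(unpack_frame_header(hdr))  # (64, 1, 4, 1)
```

Frames on different stream IDs can be sent back-to-back on the same connection, which is what "fully multiplexed" means in practice.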

Why do we need header compression?

Patrick McManus from Mozilla showed this vividly by calculating the effect of headers for an average page load. If you assume that a page has about 80 assets (which is conservative in today's Web), and each request has 1,400 bytes of headers (again, not uncommon, thanks to Cookies, Referer, etc.), it takes at least 7-8 round trips just to get the headers out "on the wire." That's not counting response time; that's just to get them out of the client. This is because of TCP's Slow Start mechanism, which paces packets out on new connections based on how many packets have been acknowledged, effectively limiting the number of packets that can be sent for the first few round trips. In comparison, even mild compression on headers allows those requests to get onto the wire within one round trip, perhaps even one packet. This overhead is considerable, especially when you consider the impact upon mobile clients, which typically see round-trip latency of several hundred milliseconds, even under good conditions.
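
The arithmetic behind that figure can be sketched in a few lines, assuming 1,400-byte packets and an idealised congestion window that doubles every round trip. The exact count depends on the assumed initial window (the 7-8 figure corresponds to a very conservative one-segment start; modern stacks begin with 10 segments per RFC 6928), but the shape of the argument is the same either way: uncompressed headers take several round trips, while even 10:1 compression fits them into the very first window.

```python
import math

def round_trips(total_bytes, mss=1400, initial_window=1):
    """Round trips needed to send total_bytes on a fresh TCP
    connection, assuming the congestion window starts at
    initial_window segments and doubles each round trip
    (idealised slow start, no losses)."""
    packets = math.ceil(total_bytes / mss)
    window, sent, rtts = initial_window, 0, 0
    while sent < packets:
        sent += window
        window *= 2
        rtts += 1
    return rtts

header_bytes = 80 * 1400  # 80 requests x 1,400 bytes of headers

print(round_trips(header_bytes))                          # 7 (conservative start)
print(round_trips(header_bytes, initial_window=10))       # 4 (IW10, RFC 6928)
print(round_trips(header_bytes // 10, initial_window=10)) # 1 (10:1 compression)
```
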

Special Thank You

A special "Thank You" goes out to the Netty Project for the amazing work and support they provide. Ishlangu would not be what it is today without you.
