http2 protocol with https allegedly provides performance gains in addition to security. Many platforms require substantial upgrades to make it work. Has anyone done it? Do you think the benefits are worthwhile?
I use it, but only because it's already set up by the ServerPilot configuration. I really don't know if it brings any advantage :-[
I think it's mostly worthwhile for sites that would otherwise be stupid slow (think your Twitters, Facebooks, and other 2MB+ behemoths) but even on proper websites the benefits are indeed not negligible. See, e.g., https://www.troyhunt.com/i-wanna-go-fast-https-massive-speed-advantage/
I have been using it since the end of last year. Since you are on your own server, I'd say go for it. It has been so far so good for me.
I think you need to add that up yourself.
@Ahrasis I believe there is a proxy option too.
https://geekflare.com/free-cdn-list/
https://geekflare.com/http2-implementation-apache-nginx/
Not much to implementing it. It's available in Nginx.
Thanks for the thoughts, guys. It is a done deal. It wasn't horrible to implement, but not exactly easy either. The variable is the OS. On a CentOS 6 machine a few upgrades must be made.
Nginx has supported it out of the box since XXX version. But it had to be compiled for C6, which ships with OpenSSL 1.0.1e, and therefore so was nginx. So the first step was compiling a modern OpenSSL version. Well... a few dependencies were missing, and the guides I found didn't cover them, so a shotgun approach was in order. I finally got OpenSSL compiled. Then it was time to compile nginx. A few hiccups later I got that done. Configure http2 and off to the races.
Many publications fail to mention all that. Lol. Performance is killer too. Totally worth it.
As a noob at forum administration, from those listed in the link, the Incapsula CDN seems OK as a security option, along with the opportunity for me to work out the http2 protocol... maybe if I had some minimal SEO knowledge this would be unnecessary...
Glad you solved it.
@badmonkey ;)
I can't talk about performance or differences, since I've had it installed since the first day I switched to a VPS.
Congrats
@badmonkey. That's why I never continued with CentOS (and original Debian), though I'd prefer them if I had more time to master them. They never have an easy-to-use repo for most of the latest things.
At least with Ubuntu I can use the latest updated and stable versions via various PPAs in an easy way, without compiling things myself.
By the way, since you are compiling your own custom nginx, do check out brotli. It is claimed to be far better than gzip. I don't have a good PPA to use, so I will wait, but you can definitely try it.
Thanks for the brotli tip. I've looked it over but never thought about using it in nginx. Hhmmm.... 8)
On a tangent, that's actually a nice thing about compiling your own. You can have a custom setup in minutes!
And if you don't mind, do share your code for compiling your custom nginx (just the nginx part). Maybe I'll try playing with it sometime. ;)
It's something like this, with a contingency plan at the end should something go wrong. ;)
So you will have a configuration backup, just in case.
cp -R /etc/nginx /etc/nginx_bak
A local backup isn't a terrible idea either.
Be sure to find your openssl version, and replace the version in the configure command below. Use this to find your version (must be 1.0.2 or greater for http2 support)
openssl version
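If you'd rather script that check, here's a small sketch. The `version_ge` helper is a name I made up; it leans on GNU `sort -V` for version-aware ordering:

```shell
# Hypothetical helper: succeeds when version $1 >= version $2.
# sort -V orders version strings; if $2 sorts first, $1 is at least $2.
version_ge() {
  [ "$(printf '%s\n' "$1" "$2" | sort -V | head -n1)" = "$2" ]
}

# Second field of "openssl version" output is the version string
have=$(openssl version | awk '{print $2}')
if version_ge "$have" "1.0.2"; then
  echo "OK: OpenSSL $have is new enough for http2"
else
  echo "Too old: OpenSSL $have, need 1.0.2 or greater"
fi
```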
mkdir /home/projects
cd /home/projects
wget http://nginx.org/download/nginx-1.13.7.tar.gz
tar zxf nginx-1.13.7.tar.gz
cd nginx-1.13.7
./configure --prefix=/etc/nginx --sbin-path=/usr/sbin/nginx --modules-path=/usr/lib64/nginx/modules --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log --http-log-path=/var/log/nginx/access.log --pid-path=/var/run/nginx.pid --lock-path=/var/run/nginx.lock --http-client-body-temp-path=/var/cache/nginx/client_temp --http-proxy-temp-path=/var/cache/nginx/proxy_temp --http-fastcgi-temp-path=/var/cache/nginx/fastcgi_temp --http-uwsgi-temp-path=/var/cache/nginx/uwsgi_temp --http-scgi-temp-path=/var/cache/nginx/scgi_temp --user=nginx --group=nginx --with-openssl=/usr/src/openssl-1.0.2a --with-compat --with-file-aio --with-threads --with-http_addition_module --with-http_auth_request_module --with-http_dav_module --with-http_flv_module --with-http_gunzip_module --with-http_gzip_static_module --with-http_mp4_module --with-http_random_index_module --with-http_realip_module --with-http_secure_link_module --with-http_slice_module --with-http_ssl_module --with-http_stub_status_module --with-http_sub_module --with-http_v2_module --with-mail --with-mail_ssl_module --with-stream --with-stream_realip_module --with-stream_ssl_module --with-stream_ssl_preread_module --with-cc-opt='-O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic -fPIC'
make
cd /usr/sbin
cp nginx nginx.dist
cp /home/projects/nginx-1.13.7/objs/nginx /usr/sbin/nginx.custom
service nginx stop ; rm /usr/sbin/nginx ; ln -s nginx.custom nginx ; service nginx start
Woops!!!!! I need to go back to a working version for a production environment!!
service nginx stop ; rm /usr/sbin/nginx ; ln -s nginx.dist nginx ; service nginx start
You're probably noticing there is a newer nginx version, 1.13.8. That version is super slow with the geoip module. Should be fine if you aren't using it. Should you forget to add a module or wish to recompile for whatever reason, a
make clean
is recommended prior to the ./configure command. ;)
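And for completeness, the "configure http2" part is just a tweak to the listen directive in your TLS server block. A sketch only; the server name and certificate paths below are placeholders, not my actual config:

```nginx
server {
    # adding "http2" here is all it takes once nginx is built with --with-http_v2_module
    listen 443 ssl http2;
    server_name example.com;                                 # placeholder
    ssl_certificate     /etc/ssl/example.com/fullchain.pem;  # placeholder
    ssl_certificate_key /etc/ssl/example.com/privkey.pem;    # placeholder
}
```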
@ahrasis do you think brotli will make a significant difference over gzip for dynamic content? Some suggest its greatest benefit is serving static content. Still, you've piqued my curiosity.... :D
Also, how would you (or anyone else here) feel about the nginx pagespeed module? Which would be better for overall performance? Or both?
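For the static-content side, for reference, the ngx_brotli module has a brotli_static directive analogous to gzip_static. A sketch, assuming the module is compiled in and the files were pre-compressed ahead of time:

```nginx
# Serve a pre-built foo.css.br instead of compressing on the fly,
# whenever the client sends Accept-Encoding: br and the .br file exists
brotli_static on;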
Well that was interesting. Using this compression tool:
http://www.visiospark.com/gzip-compression-test/
Here are the results with gzip level 6:
Original Size: | 51.3 KB |
Gzipped / Compressed Size: | 9.3 KB |
Compression Percentage: | 81.87% of page is compressed |
Status Code: | 200 |
Request Time: | 1.03636s |
Compression Time: | 0.00184607505798s |
Content Type: | text/html; charset=UTF-8 |
And brotli level 6:
Original Size: | 51.8 KB |
Gzipped / Compressed Size: | 9.6 KB |
Compression Percentage: | 81.47% of page is compressed |
Status Code: | 200 |
Request Time: | 0.98672s |
Compression Time: | 0.00286102294922s |
Content Type: | text/html; charset=UTF-8 |
Yes, those are in the correct order. Huh.
Reducing the compression level to 4 did yield a faster compression time than gzip with a slightly better compression. In the interest of full disclosure, these were one-time tests. They're hardly scientific but do demonstrate possible capabilities.
Original Size: | 51.4 KB |
Gzipped / Compressed Size: | 9.3 KB |
Compression Percentage: | 81.91% of page is compressed |
Status Code: | 200 |
Request Time: | 0.98454s |
Compression Time: | 0.00139307975769s |
Content Type: | text/html; charset=UTF-8 |
In all honesty I think a forum with a handful of tiny images is too small in rendered page size to benefit from brotli over gzip. But hey, it was fun trying. ;D Clearly they're both better than no compression!!
Well, I haven't tried it yet, so I didn't know. From what I read, brotli should not be asked to compress an already-compressed file, because compressing a compressed file like a PNG will only create a bigger file than the original. And from the claims, it does work better on static files, but I can't be sure how that works or whether it will work.
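It's easy to see that effect with gzip standing in for brotli. Here random bytes play the part of a PNG (already high-entropy data); just an illustration, not a benchmark:

```shell
# Compressing already-compressed (high-entropy) data backfires:
# the output carries format overhead but gains no size reduction.
head -c 65536 /dev/urandom > sample.bin   # stand-in for a PNG/JPEG
gzip -c sample.bin > sample.bin.gz
orig=$(wc -c < sample.bin)
once=$(wc -c < sample.bin.gz)
# The "compressed" copy comes out larger than the original
echo "original: $orig bytes, gzipped: $once bytes"
```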
How did you set your nginx.conf for brotli anyway?
Here is the conf. Also in the interest of full disclosure, gzip settings are included. ;)
##
# Gzip Settings
##
gzip on;
gzip_static on;
gzip_min_length 2048;
gzip_disable "msie6";
gzip_vary on;
gzip_comp_level 6;
gzip_proxied any;
gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript application/javascript text/x-js image/gif image/jpeg image/jpg;
gzip_buffers 16 32k;
# Brotli settings
brotli on;
brotli_comp_level 4;
brotli_buffers 16 8k;
brotli_min_length 2048;
brotli_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript application/javascript text/x-js;
brotli_types *;
I enabled HTTP2 on my Debian Stretch VPS with no problem two days ago, after posting my earlier message in this thread. ;)
My private FreshRSS instance actually seems to load minutely faster, though I'm not sure if it's a placebo effect. Besides the first load it should be all cached anyway, after all.
Oh yes. Debian Stretch should have http2 by default in its supported nginx version. But I am not so sure about brotli.
By the way, @badmonkey, this "brotli_types *;" is what I was talking about regarding compression of already-compressed files via the brotli module. Try taking that out.
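If it helps, taking it out would leave something like this (just a sketch of your same settings minus the wildcard):

```nginx
# Brotli settings with the catch-all removed: only text-like types are
# compressed on the fly, so already-compressed images are left alone
brotli on;
brotli_comp_level 4;
brotli_buffers 16 8k;
brotli_min_length 2048;
brotli_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript application/javascript text/x-js;
```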
Ah yes, missed that. Thanks. Here are the results:
URL: | https://bbs.zuwharrie.com |
Original Size: | 51.2 KB |
Gzipped / Compressed Size: | 9.3 KB |
Compression Percentage: | 81.84% of page is compressed |
Status Code: | 200 |
Request Time: | 1.22118s |
Compression Time: | 0.00371408462524s |
Content Type: | text/html; charset=UTF-8 |
Kind of not much of a change compared with gzip, or maybe gzip shouldn't be on while testing it. OK, I am not sure of the right way to implement it, as all my reading only mentions settings like yours. Sorry
@badmonkey. Maybe once I try it I can figure it out.
Nothing to be sorry about. We are learning things. That's time well spent. ;) That said, I may disable it in the short term, as the page does feel slightly more sluggish. We can continue researching configuration while applying periodic tests. It's only a couple commands away!
I'm already wondering if config order matters....
Oh, actually I use Apache. :D But the older Debian would've also required compiling Apache yourself for that.
I've played around with Nginx as well, but I didn't really see a reason to switch. I suppose for non-private use at scale speed might become an issue.
Also, of course you could always consider running your Nginx (or whatever) in a Docker container so you don't have to bother compiling things yourself. I haven't investigated if there's a sensible way to do things like auto-updating (for security) in that case, however.
More messing around. Here is a result using brotli level 1.
Original Size: | 51.3 KB |
Gzipped / Compressed Size: | 9.3 KB |
Compression Percentage: | 81.87% of page is compressed |
Status Code: | 200 |
Request Time: | 0.984s |
Compression Time: | 0.00141096115112s |
Content Type: | text/html; charset=UTF-8 |
And gzip level 1.
Original Size: | 51.4 KB |
Gzipped / Compressed Size: | 9.4 KB |
Compression Percentage: | 81.71% of page is compressed |
Status Code: | 200 |
Request Time: | 1.15723s |
Compression Time: | 0.00146794319153s |
Content Type: | text/html; charset=UTF-8 |
At least in one instance level 1 gzip actually provided ever so slightly better compression than level 6. Lowering brotli compression level decreased compression time. Albeit not a great reduction in the numbers themselves, perceived snappiness of the page is certainly better. Decompression time plays into that as well. That said, I'm not seeing a reason to run either compression method greater than level 1 for on the fly (streaming) compression, nor am I seeing a huge difference in performance at that level. The edge may indeed belong to brotli!
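For anyone who wants to eyeball that level trade-off locally without touching a server, here's a rough stand-in. Assumes gzip is installed; the sample file is synthetic repetitive text, not my actual page, so the exact ratios won't match the numbers above:

```shell
# Build a ~50 KB compressible text file as a stand-in for a rendered page
yes "the quick brown fox jumps over the lazy dog" | head -c 51200 > page.html
# Compare output sizes at level 1 vs level 6
l1=$(gzip -1 -c page.html | wc -c)
l6=$(gzip -6 -c page.html | wc -c)
echo "level 1: $l1 bytes, level 6: $l6 bytes"
```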
I have updated my nginx and installed brotli and its module, but I haven't set brotli in my nginx.conf yet. Still figuring out the right way to do it.
Your page was 0.00005698204s faster and 0.1KB smaller due to brotli. They are the same.
On a busy server, savings are savings. Even if fractional. lol!
There are no claims that brotli IS the winner, only that it MAY be a winner with proper use and configuration. Streaming compression is not its intended strong suit. Here, it's been demonstrated that it can actually hold its own against streaming gzip compression. Frankly, that was unexpected. An interesting comparison would involve static file compression to take full advantage of brotli's intended use. I have no datapoints to make those comparisons, but I speculate that if brotli's documentation is even remotely correct it "may" grab a lead over gzip. Likewise, until or unless we have said datapoints, we cannot say they are the same, either.
Basically, upgrading to the latest stable versions of software like Nginx and MariaDB does help. Again, my approach is normally PPAs, but you can compile them on your own server and see for yourself.
Edited: I find this interesting - https://www.opencpu.org/posts/brotli-benchmarks/