Squid Cache Proxy (a beta how-to)

So, in my attempt to unravel the arcane secrets of iptables, so as to harness the power of the N2 by using it as a gateway, I began to ask myself the very question no reluctant student has ever thought of before this very day:

Purrrhaps I can cheat somehow?

That, and the fact that I found an unused 32GB micro SD card, got me thinking about what I could do with my limited knowledge thus far.

Ah, yes… a good ol’ Squid Cache of course!
It is a proxy that stores whatever sites you visit, with some minor caveats.

First of all, there is the question of https; however, that can be handled with the Squid feature SSL Bump.

Squid is quite heavy on disk usage, but a micro SD card can be quite cheap these days (~€9), and well, let’s see how long the poor thing lasts. Of course, you can use any storage you like; beware of wear, though.

In the meantime, I get accelerated web content, save bandwidth/data traffic and, if you have a VPN configured, that traffic goes over the VPN. So… sort of cheating, but I will have the satisfaction of a full-blown gateway on the N2. Hopefully the lovely devs, and the shoulders of the titans they stand upon to whom we owe so much gratitude, will make it possible to run (slightly edited) scripts such as wg-up etc.

To start, you need Entware enabled. (see the pinned howto)

(Here, I might add, it’s quite handy to have WinSCP installed, as you can browse your way to the configs, transfer files for backup and edit files with ease in any editor, say Notepad++ or the like.)

So we begin this venture by:

opkg update
opkg install squid

Then we look at the config:
nano /storage/.opt/etc/squid/squid.conf

This is my config by the way:

#
# Recommended minimum configuration:
#

# Example rule allowing access from your local networks.
# Adapt to list your (internal) IP networks from where browsing
# should be allowed
acl localnet src 0.0.0.1-0.255.255.255	# RFC 1122 "this" network (LAN)
acl localnet src 10.0.0.0/8		# RFC 1918 local private network (LAN)
acl localnet src 100.64.0.0/10		# RFC 6598 shared address space (CGN)
acl localnet src 169.254.0.0/16 	# RFC 3927 link-local (directly plugged) machines
acl localnet src 172.16.0.0/12		# RFC 1918 local private network (LAN)

#change to match your network
acl localnet src 192.168.1.0/255.255.255.0			# RFC 1918 local private network (LAN)
acl localnet src fc00::/7       	# RFC 4193 local private network range
acl localnet src fe80::/10      	# RFC 4291 link-local (directly plugged) machines

acl SSL_ports port 443
acl Safe_ports port 80		# http
acl Safe_ports port 21		# ftp
acl Safe_ports port 443		# https
acl Safe_ports port 70		# gopher
acl Safe_ports port 210		# wais
acl Safe_ports port 1025-65535	# unregistered ports
acl Safe_ports port 280		# http-mgmt
acl Safe_ports port 488		# gss-http
acl Safe_ports port 591		# filemaker
acl Safe_ports port 777		# multiling http
acl CONNECT method CONNECT

#
# Recommended minimum Access Permission configuration:
#
# Deny requests to certain unsafe ports
http_access deny !Safe_ports

# Deny CONNECT to other than secure SSL ports
#http_access deny CONNECT !SSL_ports

# Only allow cachemgr access from localhost
http_access allow localhost manager
http_access deny manager

# We strongly recommend the following be uncommented to protect innocent
# web applications running on the proxy server who think the only
# one who can access services on "localhost" is a local user
http_access deny to_localhost

#
# INSERT YOUR OWN RULE(S) HERE TO ALLOW ACCESS FROM YOUR CLIENTS
#

# Example rule allowing access from your local networks.
# Adapt localnet in the ACL section to list your (internal) IP networks
# from where browsing should be allowed
http_access allow localnet
http_access allow localhost
http_access deny to_localhost
# And finally deny all other access to this proxy
http_access deny all
#
# Add any of your own refresh_pattern entries above these.
#
#refresh_pattern ^ftp:		1440	20%	10080
#refresh_pattern ^gopher:	1440	0%	1440
#refresh_pattern -i (/cgi-bin/|\?) 0	0%	0
#refresh_pattern .		0	20%	4320
refresh_pattern ^ftp: 1440 20% 10080
refresh_pattern ^gopher: 1440 0% 1440
refresh_pattern -i \.(gif|png|jpg|jpeg|ico)$ 10080 90% 43200 override-expire ignore-no-store ignore-private
refresh_pattern -i \.(iso|avi|wav|mp3|mp4|mpeg|swf|flv|x-flv|webm|gifv|mkv)$ 43200 90% 432000 override-expire ignore-no-store ignore-private
refresh_pattern -i \.(deb|rpm|exe|zip|tar|tgz|ram|rar|bin|ppt|doc|tiff)$ 10080 90% 43200 override-expire ignore-no-store ignore-private
refresh_pattern -i \.index.(html|htm)$ 0 40% 10080
refresh_pattern -i \.(html|htm|css|js)$ 1440 40% 40320
refresh_pattern . 0 40% 40320
max_stale 2 weeks
refresh_pattern . 60 50% 14400 store-stale
# Specific rule for youtube caching.
#refresh_pattern -i youtube.com/.* 10080 90% 43200
refresh_pattern (/cgi-bin/|\?) 0 0% 0

max_filedescriptors 3200

#Change to match your network
dns_nameservers 192.168.1.1

#Change to your path of choice & amount of storage (20000 MB ≈ 20 GB)
cache_dir aufs /var/media/32GBMicroSD/SquidCache 20000 16 256
cache_mem 48 MB
coredump_dir /var/media/32GBMicroSD/SquidCache

# Squid user
cache_effective_user nobody


#With SSL Bump:
http_port 3128 ssl-bump generate-host-certificates=on dynamic_cert_mem_cache_size=4MB cert=/storage/.opt/etc/squid/ssl/squid-ca-cert-key.pem

# certificate generation program
sslcrtd_program /storage/.opt/lib/squid/security_file_certgen -s /var/media/32GBMicroSD/SquidCache/squid_ssldb -M 4MB


#SSL BUMP

acl step1 at_step SslBump1
acl step2 at_step SslBump2
acl step3 at_step SslBump3

#acl dontcache		dst		 "/storage/.opt/etc/squid/dontcache_exclude_domains.conf"
acl ssl_exclude_domains ssl::server_name "/storage/.opt/etc/squid/ssl_exclude_domains.conf"
acl ssl_exclude_ips     dst              "/storage/.opt/etc/squid/ssl_exclude_ips.conf"

ssl_bump splice localhost
ssl_bump peek step1 all
ssl_bump splice ssl_exclude_domains
ssl_bump splice ssl_exclude_ips
ssl_bump stare step2 all
ssl_bump bump all

#
# Logs, best to use only for debugging as they can become very large
#

access_log none #daemon:/var/media/32GBMicroSD/SquidCache/squid_access.log
cache_log /tmp/squid_cache.log   #/dev/null

You can use it as is, BUT you have to change some paths, the DNS server and the localnet acl IP value near the top. Then there is the question of RAM. Rule of thumb: per 1 GB of cache disk space, about 14 MB of RAM is required.

Example:

 cache_dir ufs /var/cache/squid/ 20000 16 256
 cache_mem 48 MB

The default is cache_mem 8 MB.
So, for the example above: (48 MB * 3) + (14 MB * 20) = 424 MB.
cache_mem is for “hot objects” that are being requested a lot.
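That arithmetic can be sketched as a quick shell snippet (the numbers are the examples from this how-to; plug in your own values):

```shell
# Rule of thumb: total footprint is roughly 3x cache_mem,
# plus ~14 MB of index RAM per GB of cache_dir.
cache_mem_mb=48    # your cache_mem setting, in MB
cache_dir_gb=20    # your cache_dir size (20000 MB = 20 GB)
ram_mb=$(( cache_mem_mb * 3 + cache_dir_gb * 14 ))
echo "Estimated RAM needed: ${ram_mb} MB"   # prints 424 MB for these values
```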

The config is somewhat extensive, but there is quite good documentation out on the vast internetz.

Read up, if you wish, on values such as refresh_pattern (mine are set to cache a lot) and so forth, to tune what you want to cache. There is a YouTube rule that you can uncomment, which is pretty handy if you watch the same funny cat video before bedtime. Not judging.
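For reference, the fields of a refresh_pattern line are: an (optionally case-insensitive, -i) regex matched against the URL, a minimum freshness age in minutes, a percentage of the object’s age during which it is still considered fresh, and a maximum age in minutes, followed by optional overrides. The image rule from the config above, annotated:

```
# refresh_pattern [-i] <regex> <min minutes> <percent> <max minutes> [options]
# i.e. matching images are fresh for at least 7 days (10080 min) and at most
# 30 days (43200 min), even when the server's headers say otherwise:
refresh_pattern -i \.(gif|png|jpg|jpeg|ico)$ 10080 90% 43200 override-expire
```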

The reference links also have some examples and deeper explanations than this how-to.

What I will focus on is getting https sites accessible through the proxy, and being able to cache those.

First, create a directory, say:
mkdir -p /storage/.opt/etc/squid/ssl

Set owner to nobody:
chown -R nobody /storage/.opt/etc/squid/ssl

Go into the directory:
cd /storage/.opt/etc/squid/ssl

Then we need to generate some certificate files:

openssl req -new -newkey rsa:2048 -sha256 -days 365 -nodes -x509 -extensions v3_ca -keyout squid-ca-key.pem -out squid-ca-cert.pem

Generate the certificate you will later import into your browser:
openssl x509 -in squid-ca-cert.pem -outform DER -out squid-ca-cert.der
Then combine these:
cat squid-ca-cert.pem squid-ca-key.pem >> squid-ca-cert-key.pem
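If you want to double-check the CA before importing it anywhere, `openssl x509 -noout -subject -dates` shows who it claims to be and when it expires. The sketch below demonstrates this on a throwaway CA generated in a temp directory (note the -subj flag, added here only to skip the interactive prompts — it is not part of the steps above); on your box you would just run the last line against the real squid-ca-cert.pem:

```shell
# Work in a scratch directory so the real files are untouched:
cd "$(mktemp -d)"
# Same generation command as above, made non-interactive with -subj:
openssl req -new -newkey rsa:2048 -sha256 -days 365 -nodes -x509 \
  -subj "/CN=Squid Demo CA" -keyout squid-ca-key.pem -out squid-ca-cert.pem 2>/dev/null
# Print the subject plus the notBefore/notAfter validity dates:
openssl x509 -in squid-ca-cert.pem -noout -subject -dates
```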

cd ..

Make some .conf files in /storage/.opt/etc/squid, to which you can add https sites that cause trouble:

touch dontcache_exclude_domains.conf
touch ssl_exclude_domains.conf
touch ssl_exclude_ips.conf

Example contents of ssl_exclude_domains.conf:

.lemmy.ml
.discord.com
.gateway.discord.gg
.ssl.gstatic.com
.discordapp.com
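ssl_exclude_ips.conf works the same way, with one destination IP or CIDR range per line. The addresses below are made-up placeholders (documentation ranges), just to show the format:

```
203.0.113.10
198.51.100.0/24
```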

Make the Squid cache directory (match your config’s path!):
mkdir -p /your/path/SquidCache

Change the ownership of the cache dir:
chown -R nobody /your/path/SquidCache

Check if squid throws a fit over your config:
squid -k parse

Create the basics for the squid cache:
squid -z

Open port 3128 and save the custom iptables rules:

iptables -A INPUT -m state --state NEW,ESTABLISHED,RELATED -m tcp -p tcp --dport 3128 -j ACCEPT
iptables-save >/storage/.config/iptables/rules.v4

Stop squid:
/storage/.opt/etc/init.d/S22squid stop
Start squid:
/storage/.opt/etc/init.d/S22squid start

Copy the .der certificate file you created to the computer that will use the proxy; then you can import the certificate via the browser settings:

Chrome:
Settings / Show advanced settings / HTTPS/SSL / Manage certificates; then import the .der into Trusted root certification authorities

Firefox:
Options / Privacy & Security / Certificates / View Certificates / Authorities - Import; and import the .der.

Configure your browser to use your proxy:
Example: 192.168.1.3, port 3128, and check ‘use this proxy for FTP/SSL’.
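If you also want command-line tools on the client machine to go through the proxy, the common approach is the proxy environment variables (the address below is the example IP/port from above; adjust to your setup):

```shell
# Point CLI tools at the Squid proxy (example address, change to yours):
export http_proxy="http://192.168.1.3:3128"
export https_proxy="$http_proxy"
echo "Using proxy: $https_proxy"
# afterwards, e.g.  curl -I https://coreelec.org  would go through Squid
```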

Android browsers seem to be another story in themselves.
Leave this page open if you need the documentation.
Reboot CoreELEC.
Other than that, you should be set to go, besides revisiting the tiny chore of making a new certificate once the ‘-days 365’ have passed since you generated your certificates.

Test your proxy: https://ipleak.net

You can get an estimate of how much your web content will be accelerated by testing the storage speed (change the path to match):

Write speed test:
echo 3 > /proc/sys/vm/drop_caches && dd if=/dev/zero of=/var/media/32GBMicroSD/test.speed bs=1M count=1024

Read test:
echo 3 > /proc/sys/vm/drop_caches && dd if=/var/media/32GBMicroSD/test.speed of=/dev/null bs=1M

rm /var/media/32GBMicroSD/test.speed

Test the transmission rate; iperf is not all that accurate, but…

opkg install iperf

Install iperf on the computer using the proxy, e.g. from:
https://iperf.fr/iperf-download.php

Start the server on Windows by opening a PowerShell in the dir where you extracted iperf, then:

./iperf -s

And on CoreELEC:

iperf -c 192.168.1.2 (change to match the IP)

References:
https://elatov.github.io/2019/01/using-squid-to-proxy-ssl-sites/

https://advsoft.info/articles/setting_up_HTTPS_inspection_in_squid.php

https://www.pks.mpg.de/~mueller/docs/suse10.2/html/opensuse-manual_en/manual/sec.squid.configfile.html

Sorry for the grammar edits, but a man’s got to sleep at night.
Also, I would be grateful if any user could report in, so the beta can be closed. It’s a somewhat lengthy how-to after all, but worth it.


Excuse me, it seems that you write only for experts. What is Squid Cache for?

I reply to myself; I found this: “Squid: Optimizing Web Delivery — Squid is a caching proxy for the Web supporting HTTP, HTTPS, FTP, and more. It reduces bandwidth and improves response times by caching and reusing frequently-requested web pages.”

The next question is: is it really useful to use CoreELEC as a Squid Cache home server? What is going to improve?


The web content you access from your browser will be cached in the Squid proxy; the next time you access the same content, it will be pulled from the Squid proxy instead.

If there is new content, squid will get only that content, and cache this content for next time.

I.e., web access will basically be as fast as your CoreELEC host and whatever network sits between you and it.

If you have OpenVPN or WireGuard configured, your content will travel through the VPN. Check with https://ipleak.net to confirm.

Well, I tried to make it as step-by-step as possible. If anything is unclear, please feel free to ask and I will try to answer.

Squid is also a good solution to blacklist websites and prevent your kids from visiting websites not suitable for them. :wink:
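A minimal sketch of such a blacklist, assuming you keep the domains in a file of your own choosing (the filename here is hypothetical); the deny line must come before the ‘http_access allow localnet’ line in the config:

```
# blocked_domains.conf holds one domain per line, e.g. .example-bad-site.com
acl blocked_sites dstdomain "/storage/.opt/etc/squid/blocked_domains.conf"
http_access deny blocked_sites
```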

Thank you! For many years I have been using the adguard DNS on the router without problems, for home use it seems to me the easiest solution.

There are preconfigured ad-filtering dns-servers you can use otherwise, either by this howto or by looking for the domains here.

Usage report:

I have been tossed into a situation where I have to rely on my phone tethering for the internetz.

Oh jeez, I’m glad I set this up. Now not going bananas is an actual option, and surviving is possible until my ISP gets its whatever-what-now together. It’s a very noticeable difference, to say the least. I can only imagine how useful this is in locations where internetz infrastructure “is what it is”.

In fact, I’m contemplating setting up a Docker container for this; it makes it easier to measure what is going on and so forth. There are a couple of image options, both Squid’s own and some others.

Have Squid and Pi-hole running on my media server, working well.

Shoog


Usage report 2:

Migrated to another computer (painless, cache dir & all) and, while I was at it, installed SquidAnalyzer and let it do its thing with a 320 MB logfile. It should have put out nice pie charts, but I get a ‘permission denied’. Anyhow, going by the numbers:

tl;dr: The cached hits are about 10%.
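For anyone who would rather skip SquidAnalyzer, a rough hit rate can be pulled straight from an access.log with awk. The sketch below fabricates a four-line sample log (Squid’s native log format puts the result code, e.g. TCP_HIT/200, in the fourth field) so the mechanics are visible:

```shell
# Fabricated sample log; a real one comes from the access_log directive.
cat > /tmp/sample_access.log <<'EOF'
100 12 10.0.0.2 TCP_HIT/200 512 GET http://a/ - HIER_NONE/- text/html
101 30 10.0.0.2 TCP_MISS/200 512 GET http://b/ - HIER_DIRECT/x text/html
102 5 10.0.0.2 TCP_MEM_HIT/200 512 GET http://a/ - HIER_NONE/- text/html
103 44 10.0.0.2 TCP_MISS/200 512 GET http://c/ - HIER_DIRECT/x text/html
EOF
# Count lines whose result code (field 4) contains HIT:
awk '{ total++ } $4 ~ /HIT/ { hits++ } \
  END { printf "%.1f%% cached hits\n", 100*hits/total }' /tmp/sample_access.log
# prints: 50.0% cached hits
```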
