To encrypt NFS share traffic between an NFS client and an NFS server, two options are generally used: Kerberos authentication with privacy (krb5p), or a tunnel over SSH, also known as SSH port forwarding.
In this article I am going to discuss the tunnel-over-SSH option for Kubernetes pod applications that mount a shared path from an NFS server. In general, a tunnel over SSH is common and easy to implement when you are simply port forwarding between two machines, the NFS server and the NFS client. These machines can be Windows, Linux, or a combination of both.
The challenging part comes into the picture when a Kubernetes cluster is in place and your NFS client wants to mount the NFS server's shared path into a Kubernetes application. The reason it is challenging is that Kubernetes pods do not mount the shared path directly; instead they depend on the cluster's Persistent Volume Claims, which in turn raise a resource request against a Persistent Volume in the cluster.
The setup used in this article:
1. RHEL – Linux master as NFS server
2. RHEL – Linux node as NFS client, which also runs the pods and provides the Kubernetes runtime environment.
A share named “NFS_Senstive_Data_Share” will be created on the NFS server, and it will be accessed from a Kubernetes pod application as a mounted path.
Before we get into the implementation, here is a quick explanation of how a tunnel over SSH works, with a short sample.
ssh -fNv -c aes192-ctr -L 2049:127.0.0.1:2049 SERVICEUSER@NFSServerIP sleep 365d
The above command, run on the NFS client, takes any traffic directed at the NFS client's local port 2049 and forwards it, first through SSHD on the remote server (the NFS server), and then on to the NFS server's port 2049. The port forwarding runs as a background process that can keep running for a defined long period (here, 365 days via "sleep 365d"). The session between the NFS client and the NFS server is authenticated by an SSH key pair (RSA public and private keys), so login happens through the key files instead of typing passwords.
Hopefully that gives a basic understanding of how tunnel-over-SSH port forwarding works.
Let's move on to the implementation:
Configuring NFS Server and NFS client
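The configuration commands themselves are not listed in the article, so here is a minimal sketch of what they typically look like. The share name NFS_Senstive_Data_Share, the SERVICEUSER account, and the tunnel command come from the article; the filesystem paths, export options, and key options are my assumptions.

```shell
# --- On the NFS server (Linux master) ---
# Export the share to localhost only, so it is reachable solely
# through the SSH tunnel that terminates on this machine.
mkdir -p /NFS_Senstive_Data_Share
echo '/NFS_Senstive_Data_Share 127.0.0.1(rw,sync,no_subtree_check)' >> /etc/exports
exportfs -ra

# --- On the NFS client (Linux node) ---
# Create a key pair for passwordless login and copy the public key over.
ssh-keygen -t rsa -b 4096 -f ~/.ssh/id_rsa -N ''
ssh-copy-id SERVICEUSER@NFSServerIP

# Start the tunnel: local port 2049 -> SSHD on the server -> server's port 2049.
ssh -fNv -c aes192-ctr -L 2049:127.0.0.1:2049 SERVICEUSER@NFSServerIP sleep 365d
```

Exporting to 127.0.0.1 only is deliberate: it prevents unencrypted NFS mounts from other hosts, so the tunnel is the only way in.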
Now that the tunnel over SSH is successfully enabled, all incoming traffic to the NFS client's forwarded port will be sent on to the NFS server's port through SSHD.
A few points to notice in the above commands:
-c aes192-ctr – the forwarded traffic is encrypted with the AES-192-CTR cipher (a stronger cipher such as aes256-ctr can be chosen if both sides support it).
-f – makes the port forwarding run in the background; the ssh process persists until you explicitly kill it with the Unix kill command (the "sleep 365d" keeps the session alive).
Now let's configure the Kubernetes side.
Configuring Kubernetes persistent volume and claims
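The PV and PVC files the article refers to are not shown, so below is a minimal sketch of what they might look like. The PV points at 127.0.0.1 so that NFS traffic from the node goes through the local SSH tunnel; the names nfs-pvclaim-sensitivedata, nfs-in-a-pod, myproductNamespace, and the /mnt mount path are taken from the article's verification output, while the PV name, capacity, access mode, and container image are my assumptions.

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv-sensitivedata
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: 127.0.0.1            # localhost: traffic enters the SSH tunnel
    path: /NFS_Senstive_Data_Share
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pvclaim-sensitivedata
  namespace: myproductNamespace
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""           # bind to the pre-created PV, not a StorageClass
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: nfs-in-a-pod
  namespace: myproductNamespace
spec:
  containers:
    - name: app
      image: busybox
      command: ["sleep", "3600"]
      volumeMounts:
        - name: nfs-volume
          mountPath: /mnt        # the path verified later in the article
  volumes:
    - name: nfs-volume
      persistentVolumeClaim:
        claimName: nfs-pvclaim-sensitivedata
```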
That's all; now just deploy the pod and the K8s PV/PVC files. Once the deployment is done, a persistent volume with a tunnel-over-SSH-enabled mount will be created on the NFS client (the Linux node).
Let's verify things:
First, let's verify that the PV mount has been created on the NFS client (Linux node):
[root@NFSClient ~]# mount | grep nfs
You should get output like:
localhost:/NFS_Senstive_Data_Share on /var/lib/kubelet/pods/794ea09e-0354-436d-9498-6038f352e64c/volumes/kubernetes.io~nfs/nfs-pvclaim-sensitivedata type nfs4 (rw,relatime,vers=4.2,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=127.0.0.1,local_lock=none,addr=127.0.0.1)
Also verify that the SSH tunnel is active, using the command below:
sudo lsof -i -n | egrep '\<ssh\>'
Second, let's try to access the volume mount path inside the Kubernetes pod.
[root@NFSServer ~]# kubectl exec -it nfs-in-a-pod -n myproductNamespace -- sh
/ # cd /mnt
/mnt # ls    <-- here you can see the files inside the NFS shared folder.
That's all! The volume mount is now created inside the Kubernetes pod, and the traffic between the NFS server (Linux master) and the NFS client (Linux node / K8s pods) is ENCRYPTED!
Tuesday, October 22, 2019
Kubernetes NFS encrypted communication: Kubernetes pod applications (as NFS client) and Linux based machine (as NFS server) – secure traffic using Tunnel Over SSH
Tuesday, August 20, 2019
Where to store Access Token? For standalone SPA client (Angular)
Authentication implementation for a standalone SPA (without a dedicated backend server, see image below) always runs into the question “Where do we store the access token?” after successful authentication and token exchange with the identity provider.
Typically, in such scenarios, we are forced to choose either browser storage or a browser cookie. The catch is that both are open to vulnerabilities, and it is up to developers to decide which one, with the right countermeasures in our application, is less vulnerable than the other. Period!
If we google for an answer from experts, we end up with mixed answers, since both options have their own pros and cons. This section discusses the pros and cons of both options, and a hybrid approach which I recently implemented in one of our applications.
At a high level:
- If we proceed with browser storage, we open a window for XSS attacks and must implement mitigations.
- If we proceed with a browser cookie, we open a window for CSRF attacks and must implement mitigations.
In detail,
Storing the access token in browser storage:
Assume our application authenticates the user against a backend AUTH REST service, gets an access token in the response, and stores it in browser local storage to perform authorized activities.
Pros:
- With the Angular framework's default protection of treating all values as untrusted before sanitizing them, XSS attacks are much easier to deal with than XSRF.
- Unlike cookies, local storage values are NOT automatically carried on every request (the browser's default behavior for cookies), and local storage has same-origin protection by default.
- RBAC on the UI side can be implemented without much effort, since the access token with its permission details is still accessible to Angular code.
- There is no limit on access token size (a cookie is limited to only 4 KB), which matters if many claims and user permissions are attached to the token.
Cons:
- If an XSS attack happens, a hacker can steal the token and perform unauthorized activities with the valid access token, impersonating the user.
- Some extra developer effort is required to implement an HTTP interceptor that adds the bearer token to HTTP requests.
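To make that interceptor effort concrete, here is a framework-free sketch in TypeScript (the helper name and storage key are mine, not from the article): a small function that builds the Authorization header from a token read out of local storage, which an Angular HttpInterceptor or a fetch wrapper could call.

```typescript
// Hypothetical helper: copy the given headers and, when a token exists,
// attach it as a Bearer token in the Authorization header.
function withBearer(
  headers: Record<string, string>,
  token: string | null
): Record<string, string> {
  if (!token) {
    return { ...headers }; // no token yet (e.g. before login): send as-is
  }
  return { ...headers, Authorization: `Bearer ${token}` };
}

// In an Angular interceptor this would be used roughly as:
//   const token = localStorage.getItem('access_token');
//   next.handle(req.clone({ setHeaders: withBearer({}, token) }));
```

Keeping the header logic in one helper means every outgoing request goes through the same code path, instead of each module attaching the token by hand.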
Storing the access token in a browser cookie:
Assume our application authenticates the user against a backend AUTH REST service, gets an access token in the response, and stores it in a browser cookie (as an HTTP-only cookie) to perform authorized activities.
Pros:
- As it is an HTTP-only cookie, XSS attacks cannot inject script to steal the token. This gives good protection against XSS attacks stealing the access token.
- No extra effort is required to pass the access token as a bearer token in each request, since by default the browser sends cookies with every request.
Cons:
- Extra effort is needed to prevent CSRF attacks. Though SameSite cookies and same-origin header checks provide CSRF prevention, OWASP standards still recommend having these only as a secondary defense, NOT as the primary defense, since they can still be bypassed; see https://tools.ietf.org/html/draft-ietf-httpbis-rfc6265bis-02#section-5.3.7.1
- Extra effort to implement XSRF/anti-forgery token generation and validation (if the backend services are still vulnerable to form-action requests), and an HTTP interceptor is needed in the Angular client to add the XSRF token to the request header.
- The maximum supported cookie size is 4 KB, which may be a problem if many claims and user permissions are attached to the token.
- As default browser behavior, the access token cookie is carried automatically on all requests; this is always an open risk if there is any misconfiguration in the allowed origins.
- An XSS vulnerability can still be used to defeat all available CSRF mitigation techniques.
Storing the access token in a hybrid approach:
For scenarios like OAuth 2.0 flow integration for an SPA client (either the implicit grant flow or the auth code with PKCE extension flow), after user authentication and token exchange the respective identity providers (e.g. IdentityServer4, Azure AD B2C, ForgeRock, etc.) return the access token in the HTTP response body; they do not set the access token as a cookie via a response header. This is the default behavior of all identity providers for public clients using the implicit or auth code + PKCE flows. Since the access token cannot be set as a cookie on the server side, the Same-Site and HTTP-Only properties cannot be enabled; those properties can only be set from the server side.
For scenarios like the above, the only way to store the access token is either browser local storage or session storage. But if we store the access token there and our application is vulnerable to an XSS attack, we are at risk of a hacker stealing the token from storage and impersonating that valid user's permissions.
Considering the possible threats mentioned above, I would recommend a hybrid approach for better protection from both XSS and XSRF attacks:
“Continue storing the access token in local storage, but as a secondary, defense-in-depth protection add a session-fingerprint check. This session fingerprint should be stored as an HTTP-only cookie, which XSS cannot tamper with. While validating the access token in the Authorization header, also validate the session-fingerprint HTTP-only cookie. If both the access token and the session-fingerprint HTTP-only cookie are valid, treat the request as valid; if the HTTP-only cookie is missing, treat the request as invalid and return Unauthorized.”
This way, even if an XSS attack happens and the hacker steals the token from local storage, the hacker still cannot perform unauthorized activities, because the secondary defense, the referenced HTTP-only auth cookie, is not something the hacker can obtain through XSS. We are much better protected now!
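The server-side check described above can be sketched as follows in TypeScript (the function names are mine; it assumes Node's crypto module). At login the server issues a random fingerprint, sends the raw value to the browser as an HttpOnly cookie, and binds only its hash to the token or session; on each request both the access token and the fingerprint cookie must check out.

```typescript
import { createHash, randomBytes } from "crypto";

// At login: generate a random fingerprint. The raw value goes to the
// browser as an HttpOnly cookie; only its hash is bound to the token/session.
function issueFingerprint(): { cookieValue: string; boundHash: string } {
  const cookieValue = randomBytes(32).toString("hex");
  const boundHash = createHash("sha256").update(cookieValue).digest("hex");
  return { cookieValue, boundHash };
}

// On each request: after validating the access token from the
// Authorization header, also validate the fingerprint cookie.
// A missing or non-matching cookie means the request is Unauthorized,
// even if the (possibly stolen) access token itself is valid.
function fingerprintValid(
  cookieValue: string | undefined,
  boundHash: string
): boolean {
  if (!cookieValue) return false;
  return createHash("sha256").update(cookieValue).digest("hex") === boundHash;
}
```

An XSS payload can read local storage but not the HttpOnly cookie, so a stolen token alone fails the fingerprint check.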
I would recommend the above hybrid approach only for scenarios where your only choice is storing the access token in local storage or session storage.
But in case your application does have the possibility of setting the access token in a cookie on the server side after successful authentication, with HTTP-Only, SameSite=Lax, and Secure enabled, then I would still recommend storing the access token in a cookie, accepting the open risks and conditions below:
- As per OWASP standards, SameSite cookies and same-origin/header checks are only considered a secondary defense. XSRF-token-based mitigation is recommended as the primary defense, which again requires developer effort in each module to implement the XSRF token in an HTTP interceptor; alternatively, you must give proper justification for living with the open vulnerability of having only a secondary defense as CSRF protection.
- None of our GET APIs are state-changing requests, so developers are not violating https://www.w3.org/Protocols/rfc2616/rfc2616-sec9.html#sec9.1.1
- We don't foresee our token size reaching 4 KB in the future; the current size is ~2 KB.
- If SameSite=Strict is applied, it would impact application behavior, since it also blocks cookies on top-level navigation requests.
- None of our backend services support [FromQuery] or [FromForm] data binding.
- Teams have justified living with the cons of browser cookies explained in the section above.
Conclusion
The debate over choosing browser storage versus a browser cookie will continue unless our SPA design has a dedicated backend server that stores the access token on the server in the HTTP context and does not expose it to the browser at all. Until then, it is up to developers to decide which browser storage mechanism in our application has more multi-layered (primary plus defense-in-depth) protection, making it less vulnerable than the other. The reasoning behind continuing with browser storage is explained above, as are the possibilities of storing the token in a browser cookie with the open risks mentioned.