Saturday, December 19, 2020

Docker Security - Learn by exploiting the weaknesses

In our Docker environment, the moment we start thinking about protecting Docker with the right secure practices, the first buzzword that comes to mind is "isolation". 

 Yes, in Docker security the real buzzword is "isolation". The more you isolate the container runtime from the Docker host, and one container from another, the closer you are to security. To bring this "isolation", Docker as a framework supports several isolation mechanisms by default, such as 
  •   Namespaces 
  •   Cgroups 
  •   Kernel capabilities 
Docker namespaces provide much of the isolation by separating the "process", "mount", "network stack", and other namespaces. For example, with the process namespace, isolation is provided between the processes in the container and the processes on the host: the same process has one process ID on the host and a different process ID inside the container. Processes running on the host cannot be accessed from inside the container, and vice versa. This way Docker ensures that one container does not disturb another container, and does not disturb the host either. 
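A rough sketch of the PID namespace isolation, assuming a host with Docker installed and an alpine image available:

```shell
# Inside the container the process tree starts again at PID 1, so
# ps shows only the container's own processes, not the host's.
docker run --rm alpine ps aux
```

On the host, the same container process is visible under a different, host-assigned PID.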

Cgroups are another key component supporting isolation in Docker. They implement resource accounting and limiting. They provide many useful metrics, but they also help ensure that each container gets its fair share of memory, CPU, and disk I/O; and, more importantly, that a single container cannot bring the system down by exhausting one of those resources. 
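These cgroup limits can be set per container from the Docker CLI; a minimal sketch using the standard `docker run` resource flags:

```shell
# Cap the container at 256 MB of memory and half a CPU so a runaway
# workload cannot exhaust the host's resources.
docker run --rm --memory=256m --cpus=0.5 alpine sleep 5
```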

If we take kernel capabilities, Docker by default restricts the set of kernel capabilities available within the container. For example, even the root user inside a Docker container does NOT have all the capabilities that root has on the Docker host.

Along with the aforementioned isolation practices, we will look at some of the secure practices that Docker and the Linux kernel support.

Below is the list of Docker secure practices we will discuss in this article. Also, in my view, if we want to protect something we should also know how to break it. So let's learn by exploiting the weakness of each secure practice in the Docker environment.

Let's start with the Docker architecture to understand why we say "isolation" is important in Docker security. If you look at the diagram below, you can see how the kernel is positioned in the Docker architecture compared with a traditional VM architecture. In the VM architecture each individual VM has its own dedicated kernel, but that is not the case in the Docker architecture: all container processes share the same host kernel across the cluster.

This is one of the reasons why "isolation" is important in Docker security terms. For example, if one container is compromised with an attacker's arbitrary code, there is a possibility of the vulnerability breaking out from the container to the host kernel. As the kernel is shared across containers and the Docker engine is positioned above the host kernel, the attack surface extends to breaking out to the other containers in the cluster as well. This is the risk the Docker architecture poses by sharing the host kernel across container processes. 

Rootless containers

Run your containers as "rootless containers": run the entire container runtime, as well as the containers themselves, without root privileges.

In a normal scenario, when the Docker engine spins up a new container process, the default privilege the container runs with is "root". Though the default Docker isolation practices limit the root user's capabilities within the container, the container is still running as the root user. If the container runtime is compromised, the attacker has maximum impact within the container, and if the vulnerability breaks out, it will have access to the Docker engine and the host machine's kernel.

Also, if we really look into the need for running containers in root mode: in roughly 90% of cases there is NO need to run a container as root. 

Below are the potential threats of running a container in root mode.

Within the container

A compromised container runtime with root context can perform any action inside the container, including installing new software, editing files, mounting filesystems, modifying permissions, etc.

Outside the container

In a compromised container, the vulnerability could:
  •      Break out of the container and escalate privileges to the host.
  •      Break out of the container to damage other containers.
  •      Break out to the Docker engine and make requests to the Docker API server.

How to exploit root containers 

Here I will show you how a container running in root mode can be exploited in simple ways.

I've used Katacoda as a testing environment.

As a first step to exploit, you can verify the mode the container is running in, as shown below. In the container below, I verified that it is running as root by running the "whoami" command inside the container.
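The check can be sketched as below (the container name "mycontainer" is just an example):

```shell
# Prints "root" when the image did not switch to a non-root USER.
docker exec mycontainer whoami
```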

Privilege escalation to host machine

In the steps below, I've shown how privilege escalation happens from the Docker container to the Docker host. To simulate it, I mounted the host machine's filesystem as a volume into the container, then ran the command "cat /host/etc/shadow". The output lists the user details of the host machine.
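The simulation can be sketched roughly like this; the "-v /:/host" mount is what makes the escalation possible, so try it only in a lab environment:

```shell
# DANGER: mounts the host's root filesystem into the container at /host.
docker run --rm -v /:/host alpine cat /host/etc/shadow
# Because the container runs as root, it can read (and write) host
# files through the mounted path, including the password hashes.
```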

Small DoS attack within the container

In the step below, I'll show a simple DoS exploitation within the Docker container.
Here the container is running as the root user, so it has the privilege to install any software within the container. Taking advantage of that, I install the Debian package "stress", then use it to put heavy load on the container's memory, thereby bringing the container down to the "OOMKilled" state. The DoS exploit succeeded.
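A sketch of the experiment, assuming a Debian-based container started with a 256 MB memory limit (names and sizes are examples):

```shell
# Start a memory-limited container running as root.
docker run -d --name dos-demo --memory=256m debian sleep infinity

# As root inside the container we are free to install packages.
docker exec dos-demo apt-get update
docker exec dos-demo apt-get install -y stress

# Allocate more memory than the limit; the kernel OOM-kills the workload.
docker exec dos-demo stress --vm 1 --vm-bytes 512M --timeout 30s
```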

How to run as "Rootless container"

Here are some of the basic steps to consider for running your container as a "rootless container":

1. Update your YAML file (if using K8s) and set the security context section to 
       "runAsNonRoot" : true
       "runAsUser" : 1000
2. Add a new non-root user in your Dockerfile

RUN groupadd --gid 1000 NONROOTUser && useradd --uid 1000 --gid 1000 --home-dir /usr/share/NONROOTUser --no-create-home NONROOTUser

3. In case your container listens on a privileged port (anything below 1024, for example port 80), modify it
to listen on an unprivileged port (anything above 1024), for example port 5000.
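Putting steps 1 and 3 together, a pod spec would look roughly like this (the pod name, image, and port are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: myapp
spec:
  containers:
    - name: myapp
      image: myrepo/myapp:latest
      ports:
        - containerPort: 5000   # unprivileged port (above 1024)
      securityContext:
        runAsNonRoot: true      # kubelet refuses to start the pod as root
        runAsUser: 1000
```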

Rootless Docker Engine

Running the Docker engine or daemon in a non-root user context.

In the section above we saw the "rootless container"; the other secure practice here is to run your Docker engine/host itself in rootless mode.

Docker recently introduced the "rootless Docker engine" as part of Docker version 19.03. Docker recommends
running your containers in rootless mode; however, this feature is still in preview mode and is not yet
widely used.

With the command below, you can check whether your Docker engine is running in root mode or rootless mode.
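One way to check this is via the SecurityOptions field of "docker info"; a rootless engine reports a rootless entry among its security options:

```shell
# Lists the engine's security options (seccomp profile, rootless, etc.).
docker info --format '{{.SecurityOptions}}'
# A rootless engine includes "name=rootless" in this list.
```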

Docker Seccomp Profile

Secure computing mode (seccomp) is a Linux kernel feature.

  • Seccomp acts like a firewall for system calls (syscalls) from the container to the host kernel.
  • A sample list of well-known syscalls: mkdir, reboot, mount, kill, write.
  • Docker's default seccomp profile disables 44 dangerous system calls, out of the 313 available on 64-bit Linux systems.
  • As per the list of Docker CVEs, most Docker incidents are due to privileged syscalls.
  • Most of the syscalls whitelisted by Docker's default seccomp profile are NOT necessary for a given product's needs. It is recommended to have a product-specific custom seccomp profile, whitelisting only the syscalls used by your containers.
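A custom profile is a JSON file passed at container start; a sketch, where "my-seccomp.json" is a hypothetical product-specific profile:

```shell
# my-seccomp.json whitelists only the syscalls this workload actually
# needs; every other syscall is blocked by the kernel.
docker run --rm --security-opt seccomp=my-seccomp.json alpine echo ok
```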

How to check Container Seccomp Profile

We can verify whether your container runtime is enabled with the default seccomp profile protection. Just go inside your container's terminal and run the command grep Seccomp /proc/$$/status (as shown below).

A Seccomp value of 2 means it is ENABLED.
A Seccomp value of 0 means it is NOT enabled.

Docker Limited Kernel capabilities

By default, Docker starts containers with a restricted set of capabilities. This provides
greater security within the container environment.

It means that though your container process is running in root mode, the kernel capabilities
within the container are limited. Docker allows only a limited set of capabilities within the
container which the user process can execute. However, this default protection from Docker
can be overridden if you run your container in "privileged" mode.

To understand better: if you log into your Linux host machine as the root user, the Linux
kernel capabilities listed below will be allowed.

But when the same root user enters a Docker container, most of the above kernel capabilities
are dropped and only the restricted list of capabilities below is allowed. 



A privileged container can do almost everything that the host can do. 
The --privileged flag gives all capabilities to the container, and it also lifts all the limitations enforced by the device cgroup controller. 

Using the command below you can verify whether your container is running in PRIVILEGED mode or normal mode.

If the command returns true, the container is running in PRIVILEGED mode.
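The check referred to above can be sketched with "docker inspect" (the container name is a placeholder):

```shell
# Reads the Privileged flag from the container's host configuration.
docker inspect --format '{{.HostConfig.Privileged}}' mycontainer
# "true" means the container was started with --privileged.
```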

Run container with limited or NO Kernel capabilities

Absolutely: in normal scenarios, most microservices running in a container do NOT need
all the kernel capabilities provided by Docker.

Hence, the best practice is to DROP all capabilities and add back only the required ones.

This can be done from the security context configuration of your Kubernetes YAML file. In your security
context, either DROP all capabilities, for example
 securityContext => capabilities => drop : ["ALL"]
or add only the required capabilities, for example

 securityContext => capabilities => add : ["NET_ADMIN", "SYS_TIME"]
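In actual YAML syntax, the two examples above look roughly like this:

```yaml
securityContext:
  capabilities:
    drop: ["ALL"]                    # drop everything first
    add: ["NET_ADMIN", "SYS_TIME"]   # add back only what the service needs
```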

Docker SE Linux Protection

Docker SELinux controls container process access by type and level. Docker offers two forms of SELinux protection: type enforcement and multi-category security (MCS) separation.

  • SELinux is a LABELING system.
  • Every process has a LABEL. Every file, directory, and system object has a LABEL.
  • SELinux policy rules control access between labelled processes and labelled objects.
!! To enable SELinux in a container, your Linux host machine must have SELinux enabled and running !!
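On an SELinux-enabled host (e.g. RHEL/Fedora with container-selinux), the labels can be observed directly; a sketch:

```shell
# Container processes typically carry the container_t type with
# per-container MCS categories, e.g. system_u:system_r:container_t:s0:c1,c2.
ps -eZ | grep container_t
```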

Docker UNIX socket (/var/run/docker.sock) usage

There is an approach followed by developers to achieve container management functionality:
they mount the Docker UNIX socket inside the container and use the socket to implement
container management features, such as collecting logs from all containers, creating a container, stopping a container, etc.


The combination of root context, privileged container mode, and a mounted UNIX socket is the most dangerous.

Below is a sample scenario which mounts the Docker UNIX socket inside a container for log management of all the containers run by the Docker engine.
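The scenario can be sketched as below; once the socket is mounted, plain HTTP over the UNIX socket gives full control of the Docker engine, which is exactly why this is dangerous (container name and image are examples, and curl is assumed to be available in the image):

```shell
# Mount the Docker socket into a "log agent" container.
docker run -d --name log-agent \
  -v /var/run/docker.sock:/var/run/docker.sock alpine sleep infinity

# From inside, anyone can drive the Docker API over the socket:
# list, create, or stop any container managed by the engine.
docker exec log-agent curl --unix-socket /var/run/docker.sock \
  http://localhost/containers/json
```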

Docker Network security 

Be cautious about how you expose the services inside the container to the outside of the cluster.

  • Do NOT expose the container with an external IP (if there is NO explicit need for one).
  • When there is a need to expose an external IP, ensure that the inbound connections are encrypted and listening on port 443.
  • Always try to expose your services only in ClusterIP mode.
  • If there is a need to expose a NodePort, ensure that the inbound connections are encrypted and listening on port 443.

Ingress and Egress rules:

Control traffic to your services with ingress and egress network policies. 
  • With strict ingress rules supported by Kubernetes you can restrict the inbound connections to your containers.
  • With strict egress rules supported by Kubernetes you can restrict the outbound connections from your containers to other networks.
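A minimal sketch of a Kubernetes NetworkPolicy that denies all ingress and egress for the pods it selects (the policy name is a placeholder); explicit allow rules are then added on top of this baseline:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny
spec:
  podSelector: {}                       # applies to all pods in the namespace
  policyTypes: ["Ingress", "Egress"]
  # With no ingress/egress rules listed, all traffic is denied;
  # add explicit rules to allow only what is needed.
```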

Other Docker Security Practices

  • Mount volumes as read-only.
  • Ensure SSHD does not run within the containers.
  • Ensure the Linux host network interface is not shared with containers.
  • Having no limit on container memory usage can lead to issues where one container can easily make the whole system unstable in case a DoS attack happens.
  • Don't mount system-relevant volumes (e.g. /etc, /dev, ...) of the underlying host into the container instance, to prevent an attacker from compromising the entire system and not just the container instance.
  • If the Docker daemon is available remotely over a TCP port, ensure TLS authentication is used.
  • Consider a read-only filesystem for the containers.
  • Leverage secret stores/wallets instead of environment variables for sensitive data storage inside the Docker container.
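Several of the practices above can be combined in a single docker run invocation; a sketch (paths and image are examples):

```shell
# --read-only: read-only root filesystem; --cap-drop=ALL: drop every
# kernel capability; --memory: contain a memory-exhaustion DoS;
# the :ro suffix mounts the data volume read-only.
docker run --rm --read-only --memory=256m --cap-drop=ALL \
  --security-opt no-new-privileges -v /data:/data:ro alpine echo hardened
```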

Saturday, December 12, 2020

Custom HTTP interceptor hook to Intercept Iframe Window HTTP requests from Parent Window in Angular

 As we all know, Angular provides a default HTTP interceptor as part of the Angular HTTP module. We can use this interceptor only to intercept the HTTP requests from the current window object. 

Recently I had a requirement to intercept the HTTP requests triggered from an iframe window object and add the intercepted values from the parent object. We tried the default Angular HTTP interceptor and, as I initially expected, it did not work, because the default HTTP interceptor does not provide a way to intercept the iframe window object. 

Hence, I've written a quick hook using JavaScript which will intercept all the XMLHttpRequests triggered from the iframe object.

This is a hook on the XMLHttpRequest open method, and the hook stays in the iframe object until the iframe window object itself is completely destroyed. 

Here is the custom hook you can add in your parent window object. You just have to inject the interceptor in the iframe load event.

Tuesday, October 22, 2019

Kubernetes NFS encrypted communication: Kubernetes pod applications (as NFS client) and Linux based machine (as NFS server) – secure traffic using Tunnel Over SSH

As we all know, to encrypt NFS share traffic between an NFS client and an NFS server, the two options generally used are Kerberos authentication with privacy (krb5p) or tunneling over SSH, also known as port forwarding.

In this article I am going to discuss the tunnel-over-SSH option with Kubernetes pod applications which mount a shared path from the NFS server. In general, a tunnel over SSH is common and easy to implement for port forwarding between two machines, the NFS server and the NFS client. These machines can be Windows, Linux, or a combination of both.

The challenging part comes into the picture in scenarios with a Kubernetes cluster in place, when your NFS client wants to mount the NFS server's shared path into a Kubernetes application. The reason it's challenging is that Kubernetes pods do not mount the shared path directly; instead they depend on the cluster's "Persistent Volume Claims", which raise a resource request to a "Persistent Volume" of the cluster. 

1. RHEL – Linux master as NFS server
2. RHEL – Linux node as NFS client, also running the pods and providing the Kubernetes runtime environment.

A share named "NFS_Senstive_Data_Share" will be created on the NFS server, and it will be accessed from a Kubernetes pod application as a mounted path.

Before we start the implementation, I'd like to give a quick explanation of how tunnel over SSH works, with a short sample.

ssh -fNv -c aes192-ctr -L 2049:localhost:2049 SERVICEUSER@NFSServerIP sleep 365d

The above command, run on the NFS client, takes any traffic directed at the NFS client's local port 2049 and forwards it, first through SSHD on the remote server (the NFS server), and then on to the remote server's port 2049. This port forwarding can run as a background process for a defined long period. The session between the NFS client and the NFS server is established with an SSH key pair (RSA public & private keys), so login happens through the key files instead of typing passwords.

Hopefully this gives a basic understanding of how tunnel-over-SSH port forwarding works.

Let's move on to the implementation:

Configuring NFS Server and NFS client
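The configuration was originally shown as screenshots; roughly, the steps were as follows (a sketch, assuming the share name from this article and key-based SSH login — the export to localhost is what allows the tunnelled connection, which arrives on the server's loopback, to mount the share):

```shell
# On the NFS server: export the share to localhost and reload exports.
echo '/NFS_Senstive_Data_Share localhost(rw,sync,no_subtree_check)' | sudo tee -a /etc/exports
sudo exportfs -ra

# On the NFS client: generate a key pair, copy it to the server,
# then open the tunnel (local port 2049 -> server port 2049).
ssh-keygen -t rsa -f ~/.ssh/nfs_tunnel
ssh-copy-id -i ~/.ssh/nfs_tunnel.pub SERVICEUSER@NFSServerIP
ssh -fNv -c aes192-ctr -L 2049:localhost:2049 SERVICEUSER@NFSServerIP sleep 365d
```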

Now that the tunnel over SSH is successfully enabled, all incoming traffic to the NFS client's port will be forwarded to the NFS server's port through SSHD.

A few points to note in the above command:
-c aes192-ctr – the forwarded traffic is encrypted with the AES-192-CTR cipher.
-f – makes the port forwarding run in the background; ssh persists until you explicitly kill it with the Unix kill command.

Now let's configure the Kubernetes

Configuring Kubernetes persistent volume and claims
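A sketch of the PV/PVC pair; the key detail is that the NFS server address is localhost, so the mount on the node goes through the SSH tunnel (names and sizes are placeholders):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv
spec:
  capacity:
    storage: 1Gi
  accessModes: ["ReadWriteMany"]
  nfs:
    server: localhost              # traffic enters the SSH tunnel here
    path: /NFS_Senstive_Data_Share
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pvc
spec:
  accessModes: ["ReadWriteMany"]
  resources:
    requests:
      storage: 1Gi
```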

That's all. Now just deploy the pod and the K8s PV volume files. Once the deployment is done, a persistent volume within K8s, with a tunnel-over-SSH-enabled mount, will be created on the NFS client (Linux node).

Let's verify things:

First, let's verify the PV volume mount is created on the NFS client (Linux node):

[root@NFSClient ~]# mount | grep nfs

You should get an output like:

localhost:/NFS_Senstive_Data_Share on /var/lib/kubelet/pods/794ea09e-0354-436d-9498-6038f352e64c/volumes/ type nfs4 (rw,relatime,vers=4.2,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=,local_lock=none,addr=

Also verify the SSH tunnel is active using the command below:

sudo lsof -i -n | egrep '\<ssh\>'

Second, let's try to access the volume mount path inside the Kubernetes pod.

[root@NFSServer ~]# kubectl exec -it nfs-in-a-pod -n myproductNamespace -- sh
/ # cd /mnt
/mnt # ls  ------ here you can see the files inside the NFS shared folder.

That's all. Now the volume mount is created inside the Kubernetes pod, and the traffic between the NFS server (Linux master) and the NFS client (Linux node / K8s pods) is ENCRYPTED !!!

Tuesday, August 20, 2019

Where to store Access Token? For standalone SPA client (Angular)

An authentication implementation for a standalone SPA (without a dedicated backend server; see the image below) always has to go through the question "Where do we store the access token?" on successful authentication and token exchange with the identity provider.

Typically, in such scenarios, we are forced to choose either browser storage or a browser cookie. The beauty is that both are open to vulnerabilities, and it's up to the developers to decide which has stronger security countermeasures in our application, making it less vulnerable than the other. Period!
If we Google for an answer from experts, we will end up getting mixed answers, since both options have their own pros and cons. This section discusses the pros and cons of both options, and the hybrid approach which I recently implemented in one of our applications.

At a high level:

If we proceed with browser storage – we open a window for XSS attacks and their mitigation.

If we proceed with a browser cookie – we open a window for CSRF attacks and their mitigation.

In detail,

Storing the access token in browser storage:

Assuming our application authenticates the user against a backend AUTH REST service, gets an access token in the response, and stores it in browser local storage to do authorized activities:
  • With the Angular framework's powerful default protection of distrusting all values before sanitizing them, XSS attacks are much easier to deal with compared to XSRF.
  • Unlike cookies, local storage values are NOT carried on all requests (the default browser behavior for cookies), and local storage by default has same-origin protection.
  • RBAC on the UI side can be implemented without much effort, since the access token with permission details is still accessible to the Angular code.
  • There is no limit on access token size (a cookie has a limit of ONLY 4 KB); the cookie limit may be problematic if many claims and user permissions are attached to the token.

  • In case an XSS attack happens, a hacker can steal the token and do unauthorized activities with the valid access token, impersonating the user.
  • Extra effort might be required for developers to implement an HTTP interceptor for adding the bearer token to HTTP requests.

Storing the access token in a browser cookie

Assuming our application authenticates the user against a backend AUTH REST service, gets an access token in the response, and stores it in a browser cookie (as an HTTP-only cookie) to do authorized activities:

  • As it's an HTTP-only cookie, XSS attacks cannot succeed in injecting script to steal the token. This gives good prevention against XSS attacks stealing the access token.
  • No extra effort is required to pass the access token as a bearer token in each request, since as default browser behavior, cookies are passed in each request.
  • Extra effort needs to be taken to prevent CSRF attacks. Though SameSite cookies and same-origin header checks give CSRF prevention, OWASP standards still recommend having these only as a secondary defense, NOT as a primary defense, since they can still be bypassed.
  • Extra effort to implement XSRF/anti-forgery token implementation and validation (if the backend services are still vulnerable to form-action requests); and an HTTP interceptor is needed in the Angular client to add the XSRF token to the request header.
  • The maximum cookie size supported is 4 KB; this may be problematic if many claims and user permissions are attached to the token.
  • As a default browser behavior, the access token cookie is carried automatically in all requests; this is always an open risk if there is any misconfiguration in the allowed origins.
  • An XSS vulnerability can still be used to defeat all available CSRF mitigation techniques.

Storing the access token in a hybrid approach:

For scenarios like OAuth 2.0 flow integration for an SPA client (either the "implicit grant flow" or the "auth code with PKCE extension" flow), after user authentication and token exchange, the respective identity provider (e.g. IdentityServer4, Azure AD B2C, ForgeRock, etc.) returns the access token in the HTTP response; it won't set the access token as a cookie via a response header. This is the default behavior of all identity providers for public clients in the "implicit flow" or "auth code + PKCE flow". Since the access token can NOT be put into a cookie on the server side, enabling the "SameSite" or "HttpOnly" properties is not possible; these properties can be set only from the server side.

For scenarios like the above, the only way to store the access token is either browser local storage or session storage. But if we store the access token there and our application is vulnerable to an XSS attack, then we are at risk of hackers stealing the token from local storage and impersonating that valid user's permissions.

Considering the above-mentioned possible threats, I would recommend a hybrid approach for better protection from XSS and XSRF attacks:

"Continue storing the access token in local storage, but as a secondary, defense-in-depth protection, add a session fingerprint check. This session fingerprint should be stored as an HTTP-only cookie, which XSS cannot tamper with. While validating the access token in the Authorization header, also validate the session fingerprint HTTP-only cookie. If both the access token and the session fingerprint HTTP-only cookie are valid, then pass the request as valid; if the HTTP-only cookie is missing, then treat the request as invalid and return Unauthorized."

In this way, even if an XSS attack happens and the hacker steals the token from local storage, the hacker still cannot succeed in doing unauthorized activities, since the secondary defense, the referenced HTTP-only auth cookie, cannot be obtained through an XSS attack. We are much better protected now!

I would recommend the above hybrid approach only for scenarios where your only choice is storing the access token in local storage or session storage.

But in case your application has the possibility of setting the access token in a cookie on the server side after successful authentication, with "HttpOnly", "SameSite=Lax", and "Secure" enabled, I would still recommend storing the access token in a cookie, with the below open risks:

  • As per OWASP standards, "SameSite" cookies and "same origin/header" checks are only considered a secondary defense. XSRF-token-based mitigation is recommended as the "primary defense", which again requires developer effort in each module to implement the XSRF token in an HTTP interceptor; or, as an alternative, you give proper justification to live with the open vulnerability of having only a "secondary defense" as CSRF protection.
  • If none of our GET APIs are "state-changing requests", developers are not violating the section:
  • If we don't foresee our token size reaching 4 KB in the future (the current size is ~2 KB).
  • If SameSite=Strict were applied, it would impact the application behavior, since it would block the cookie from being passed in top-level navigation requests too.
  • If none of our backend services support [FromQuery] and [FromForm] data binding.
  • Teams are justified in living with the "cons" of browser cookies explained in the section above.


The debate over choosing browser storage or a browser cookie will continue unless our SPA design has a dedicated backend server which stores the access token on the server in the HTTP context and does not expose the access token to the browser at all.
Until then, it's up to the developers to decide which browser storage mechanism in our application has more multi-layered (primary and defense-in-depth) protection, which makes it less vulnerable than the other. The decision behind continuing with browser storage is explained above, as are the possibilities of storing in a browser cookie with the open risks mentioned above.

Tuesday, December 18, 2018

How to build a basic online Taxi service mobile application

In this tutorial, we will see how to build a basic Uber- or Ola-like functional taxi booking service mobile application, and what it involves to host it in the Google Play store.

This is an experiment I did a few months back in my spare time. Here I will try to explain the technology stack involved in the development from end to end, and share my GitHub repository where you can access the entire source code too.

To develop the back end and front end, the technology stack I have chosen is:

  • Node.js - as web server
  • Express.js - as web framework
  • HTML5 Geolocation API / Google Maps API - for location navigation
  • MongoDB - as data source to store cab driver & user information
  • Socket.IO - as WebSocket programming library for real-time bidirectional event-based communication for live location emitting
  • RabbitMQ - for scalability, as message broker to the subscribed WebSockets
  • Cordova - hybrid mobile application development framework (i.e. can run on both iOS and Android)
  • Express.js - backbone for building the REST API services

* Note: I have read that the current Uber architecture is also based on a Node server.

For any basic cab booking service application, the high-level requirements can be:

As Uber driver login:

  • Ability to sign up and log in as a driver
  • Create a profile: email, phone number, payment account information
  • Live navigation to the booked user's pickup and drop locations
  • Push notification when a new user books
  • Ability to view booked users' basic information

As Uber user login:

  • Ability to sign up and log in as a user
  • Create a profile: email, phone number, etc.
  • Live listing of taxis which are nearby
  • Ability to see the list of taxi options available (Mini, Sedan, etc.)
  • Ability to book a taxi for a trip
  • Live navigation tracking of where the cab driver currently is
:-) Hope it has the basic needed functionalities.

OK... Let's IMPLEMENT...

Before we start coding, let us first setup the development environment needed to run our application.

Node setup 

If you are new to Node development, you can follow the steps below to set up a Node dev environment on your machine.

Download and install Node from
  • Make sure you added Node.js to your system's PATH environment variable.
  • To test the installation, open a command prompt and type "node --version". If a version number is displayed, it is installed correctly and working!

Setting Up Database environment

 For user management and taxi booking session management, the backend database system I used is a MongoDB instance. Considering the advantages, the easy Node + MongoDB integration, and the cost factor of the cloud service provider account I have, I chose MongoDB as the DB instance, but it is absolutely your choice to prefer any reliable DB system.

For people who are new to MongoDB, you can follow my old post on setting up a MongoDB development environment.

Testing MongoDB installation in your machine.

Open command prompt

  • Navigate to the "C:\MongoDB\bin" folder (or the path where you installed MongoDB) in the command prompt.
  • Type "mongod" and hit Enter. You should now see a message like "Waiting for connections on port 27017" in your command prompt.
  • Open a new command prompt (don't close the existing one) and navigate to the "C:\MongoDB\bin" folder. Type "mongo" and hit Enter. You will get a message with the MongoDB shell version and "connecting to test db". You can also notice that the console message in the other command prompt changed to "Connection accepted from ..." etc.

Setting Up Cordova platform

Cordova is the mobile development platform for hybrid mobile application development.

For people who are new to Cordova hybrid app development, here are the steps to set up the development environment on your local machine.

Cordova behind the scenes runs on the Git version control system, so as a prerequisite, first install Git on your machine (ignore this if you already have Git installed). Download and install from: default settings are recommended.

After the Git installation, you can install Cordova using the Node Package Manager (npm): 

npm install -g cordova

You can confirm the installation by running cordova --version in your command prompt.

Other Dependencies Setup

  • Download and install RabbitMQ from:
  • In the command prompt, navigate to the project folder and run the commands below to set up the Express and Mongo client dependencies.
               npm install -g express
               npm install mongodb --save   (IMPORTANT: npm installs the latest version of the MongoDB driver, which does not support the db.collection() function; update the package.json file to change the mongodb version to "^2.1.4")
               npm install --save

All Done, you have the development environment setup.

Run the Code and Mobile APP build 

Clone the source code from my Github repository

In the command prompt 

> Navigate to "C:\MongoDB\bin" and start MongoDB instance
      Type Mongod in one command prompt
      Type Mongo in another command prompt 

> Navigate to the "NodeAppDevelopment" folder and run the Node server file with "node hehaserver.js". This will run the application on your local machine.

> Navigate to the "Services" folder and run the APIServer.js Node server file; this will run the API on your machine.

Host the app in Google Play store.

As prerequisites, make sure your Node app and DB instance are running in the cloud or on your on-premises system.

To host your application in the Play store, you need a Google Play store licence, which costs $25.

To get your application ready for Play store hosting, you can follow the steps below.

Assuming your project folder is C:\MyUberApp

Step 1: Navigate to your Project Folder in command prompt

Step 2: Type "cordova build --release android"

Step 3 : Navigate to build output path: ...\platforms\android\build\outputs\apk

Step 4: Sign the APK using the command below:
jarsigner -verbose -sigalg SHA1withRSA -digestalg SHA1 -keystore YOURAPP.keystore android-release-unsigned.apk YOURAPP

Step 5: Compress/zip-align the APK.
Navigate to the below path in the command prompt:
Run the command below:
zipalign -v 4 android-release-unsigned.apk YOURAPP.apk

Now the APK is ready in the below path to release to the Play store.

The above steps will give you an APK which you can upload to your Play store account and make it LIVE !!

That's all.. Your Taxi booking service application is now ready. Enjoy riding with your friends !!!

Thursday, November 29, 2018

Moving Azure Application Insights alerts from one app insight to another app insight using Powershell

I was working with a person at Microsoft US, helping him as a freelancer with PowerShell scripting. Recently he came to me with the task of moving Azure App Insights alerts/rules from one App Insight to another. He wanted a quick solution since he had a demo with a customer in a few days. As there was NO straightforward PS scripting approach to accomplish this, I found a workaround using a token and calling an Azure internal API to do this operation, and it worked like a charm!

Here is the script which I wrote:

Tuesday, January 30, 2018

How to Create Azure AD B2C app programmatically using Powershell or Graph API

We had a critical business need to create or register native/web client applications on the Azure AD B2C blade programmatically. Currently, the Graph API and PowerShell cmdlets support creating applications only in the AD blade, NOT under the B2C blade (v2 apps). We even tried the MS Graph API; though the MS Graph POST API was able to create an application in B2C, the application was created as a "faulted app" (useless). We approached the Microsoft support team many times, and every time the response was "NO, currently we do NOT support B2C app creation programmatically." We even approached PMs at Microsoft and got the same answer, saying this functionality will be available only by next year (2019) - insane, right?

As this was a critical need for us, I was forced to find at least a temporary workaround, and here's what I tried, which worked for me.

As we know, the only way to create a B2C app is to create it manually from the portal, so as a first step I tried to mimic that activity using PowerShell.

So, using Fiddler, I first captured the access token and the Azure internal API being used while doing it manually from the portal.

With the JWT token captured through Fiddler, I decoded the token and got the "aud" (audience) information, which identifies the resource server that accepts the token. Then, using the RM context refresh-token grant type, I invoked a REST method which gives an API token for the particular "aud"/resource we got in the previous step.

Now, with this internal API token, I invoked "" to create the B2C app... Below is the sample code.
Note: This is just a temporary workaround, not recommended for production.