Monday, October 8, 2018

ForgeRock DevOps for Mac Cheatsheet

Please enjoy this cheatsheet on configuring a Mac to run the ForgeRock stack in a local Kubernetes environment.

When completed, the ForgeRock stack will be running and accessible from a web browser.  In addition, Helm charts will define the Kubernetes deployment environment, which runs inside a virtual machine on the Mac.  Lots of stuff will be running locally, so we need to allocate at least 4GB of RAM to the virtual machine.


Here we go....

First, acquire all of the software needed per our DevOps release notes.  In the end you should have these versions or later.
Install the software listed in the following table on your local computer:

Software                                                     Version
Docker client                                                18.06.1-ce
Kubernetes client (kubectl)                                  1.11.3
Kubernetes Helm                                              2.11.0
Kubernetes context switching utilities (kubectx and kubens)  0.6.1
Kubernetes log display utility (stern)                       1.8.0
VirtualBox                                                   5.2.18
Minikube                                                     0.28.2



Rather than downloading each of the above from its reference URL, this guide installs each one using the Homebrew package manager.
Choosing that route greatly simplifies dependencies now, and matters even more later when updates are needed.

I recommend following this approach.  The table above is a checklist for version awareness, not a to-do list of URLs to visit.


Homebrew: Third-party package manager for macOS.


/usr/bin/ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"



brew update



VirtualBox: an x86 and AMD64/Intel64 virtualization product

brew cask install virtualbox

Verify the installation:
virtualbox --version



Kubernetes CLIs: command-line interfaces for running commands against Kubernetes clusters
brew install kubernetes-cli kubectx

kubectl version

At this point a connection error message is OK, because a Kubernetes cluster has not yet been configured.  The important part is that a valid client version is returned.
Client Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.0", GitCommit:"91e7b4fd31fcd3d5f436da26c980becec37ceefe", GitTreeState:"clean", BuildDate:"2018-06-27T22:29:25Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"darwin/amd64"}
Unable to connect to the server: dial tcp 192.168.99.100:8443: i/o timeout
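If you would rather avoid the connection error altogether, kubectl can be asked for just the client version.  The small wrapper below is a sketch of our own (the function name is not part of any tool):

```shell
# Print only the client version; no cluster connection is attempted:
#   kubectl version --client
#
# Wrapped as a reusable check that succeeds when the kubectl binary
# itself is healthy, regardless of cluster state:
client_version_ok() {
  kubectl version --client 2>/dev/null | grep -q 'Client Version'
}
```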

Note:  At this point there should be three utilities: kubectl, kubectx, and kubens.  All of them need a running, attached Kubernetes environment to be of any use.  We will return to these tools below, after a running Kubernetes environment (Minikube) has been started and attached.


Kubernetes Logging Utility: a third-party logging utility for debugging
brew install stern

stern -v






Minikube: Minikube is a tool that makes it easy to run Kubernetes locally. Minikube runs a single-node Kubernetes cluster inside a VM on your laptop.

brew cask install minikube

minikube start --memory 8192
You can tell minikube how much memory to allocate to the VM.  If you are going to run the whole ForgeRock Platform, it is prudent to give it at least 4GB of RAM; 8GB is better.  The "--memory" switch takes an integer in MB, for example "minikube start --memory 4096".  Similarly, you can specify the number of CPUs via the "--cpus" switch.  For a complete list of options, run "minikube start --help".
Starting local Kubernetes v1.10.0 cluster...
Starting VM...
Downloading Minikube ISO
 160.27 MB / 160.27 MB [============================================] 100.00% 0s
Getting VM IP address...
Moving files into cluster...
Downloading kubeadm v1.10.0
Downloading kubelet v1.10.0
Finished Downloading kubelet v1.10.0
Finished Downloading kubeadm v1.10.0
Setting up certs...
Connecting to cluster...
Setting up kubeconfig...
Starting cluster components...
Kubectl is now configured to use the cluster.
Loading cached images from config file.

The output above is from a first-time run.  Subsequent starts will be quicker.
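Since the "--memory" switch takes an integer in MB, a tiny helper avoids conversion mistakes.  This is purely a convenience sketch of our own; the function name is not part of minikube:

```shell
# Convert gigabytes to the megabyte integer that `minikube start` expects.
gb_to_mb() {
  echo $(( $1 * 1024 ))
}

# Example: allocate 8 GB of RAM and 4 CPUs to the Minikube VM.
# minikube start --memory "$(gb_to_mb 8)" --cpus 4
```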

kubectl version

After a successful Minikube installation, kubectl is configured for the cluster.
Run kubectl version again and note that the connection error is gone, because Minikube is running and configured for the CLI.
Client Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.2", GitCommit:"bb9ffb1654d4a729bb4cec18ff088eacc153c239", GitTreeState:"clean", BuildDate:"2018-08-08T16:30:58Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.0", 
GitCommit:"fc32d2f3698e36b93322a3465f63a14e9f0eaead", GitTreeState:"clean", BuildDate:"2018-03-26T16:44:10Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
At this point VirtualBox is running and the Minikube VM is running under its control.  Because VirtualBox was started via the minikube CLI, it is running headless.  This is fine for day-to-day usage; however, if you need to change settings of the Minikube VM, the easiest way is to launch the VirtualBox console, make the desired changes, and save those settings for the VM.


To run the DevOps Examples successfully on Minikube, you must work around a Minikube issue. Run the following command every time you restart the Minikube virtual machine so that pods deployed on Minikube can reach themselves on the network:

minikube ssh sudo ip link set docker0 promisc on
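Because the workaround must be reapplied after every restart, it is convenient to wrap the start command and the workaround together.  A sketch of our own (adjust the memory figure to your machine; the function name is ours):

```shell
# Start Minikube and immediately apply the docker0 promiscuous-mode
# workaround so pods can reach themselves on the network.
start_minikube_devops() {
  minikube start --memory 8192 &&
  minikube ssh sudo ip link set docker0 promisc on
}
```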


Testing the Kubernetes Context Switching CLIs: While Minikube is running, the other Kubernetes CLI commands can be tested.

Test the namespace tool
kubens

default

kube-public
kube-system





Test the context tool
kubectx

minikube










Helm: Helm helps you manage Kubernetes applications. Helm Charts help you define, install, and upgrade even the most complex Kubernetes application.

brew install kubernetes-helm

helm init

Creating ~/.helm 
Creating ~/.helm/repository 
Creating ~/.helm/repository/cache 
Creating ~/.helm/repository/local 
Creating ~/.helm/plugins 
Creating ~/.helm/starters 
Creating ~/.helm/cache/archive 
Creating ~/.helm/repository/repositories.yaml 
Adding stable repo with URL: https://kubernetes-charts.storage.googleapis.com 
Adding local repo with URL: http://127.0.0.1:8879/charts 
$HELM_HOME has been configured at ~/.helm.

Tiller (the Helm server-side component) has been installed into your Kubernetes Cluster.

Please note: by default, Tiller is deployed with an insecure 'allow unauthenticated users' policy.
To prevent this, run `helm init` with the --tiller-tls-verify flag.
For more information on securing your installation see: https://docs.helm.sh/using_helm/#securing-your-helm-installation
Happy Helming!


helm version

Client: &version.Version{SemVer:"v2.10.0", GitCommit:"9ad53aac42165a5fadc6c87be0dea6b115f93090", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.10.0", GitCommit:"9ad53a

helm repo update

Hang tight while we grab the latest from your chart repositories...
...Skip local chart repository
...Successfully got an update from the "stable" chart repository
Update Complete. ⎈ Happy Helming!⎈ 
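Before deploying any charts, it is worth confirming that the Tiller pod actually came up in the kube-system namespace.  The helper below is a sketch of our own built on plain `kubectl get pods` output (the function name is ours):

```shell
# Succeeds once the tiller-deploy pod reports the Running status.
tiller_ready() {
  kubectl get pods -n kube-system | grep tiller-deploy | grep -q Running
}
```

You can of course just run `kubectl get pods -n kube-system` and eyeball the list instead.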

minikube addons enable ingress
helm plugin install https://github.com/adamreese/helm-nuke 



ForgeOps: Docker and Kubernetes DevOps artifacts for the ForgeRock platform.

helm repo add forgerock https://storage.googleapis.com/forgerock-charts 
helm repo update
helm install forgerock/cmp-platform --version 6.0.0


NOTE: the 'looming-bronco' reference in the text below is a name assigned randomly when this specific Kubernetes cluster is started from the Helm charts referenced above.
Your environment will have a different (and often comical) name every time the cluster is started.  This name is used for administration purposes.
NAME:   looming-bronco
LAST DEPLOYED: Sat Aug 25 17:05:56 2018
NAMESPACE: default
STATUS: DEPLOYED

RESOURCES:
==> v1beta1/Deployment
NAME                    DESIRED  CURRENT  UP-TO-DATE  AVAILABLE  AGE
amster                  1        1        1           0          2s
looming-bronco-openam   1        1        1           0          2s
looming-bronco-openidm  1        1        1           0          2s
looming-bronco-openig   1        1        1           0          2s
postgres-openidm        1        1        1           0          2s

==> v1beta1/Ingress
NAME     HOSTS                        ADDRESS  PORTS  AGE
openam   openam.default.example.com   80       2s
openidm  openidm.default.example.com  80       2s
openig   openig.default.example.com   80       2s

==> v1/ClusterRole
NAME                   AGE
looming-bronco-openam  2s

==> v1/Service
NAME                   TYPE       CLUSTER-IP      EXTERNAL-IP  PORT(S)                              AGE
configstore            ClusterIP  None            <none>       1389/TCP,4444/TCP,1636/TCP,8081/TCP  2s
openam                 ClusterIP  10.105.49.142   <none>       80/TCP                               2s
openidm                NodePort   10.108.41.237   <none>       80:32299/TCP                         2s
looming-bronco-openig  ClusterIP  10.108.212.244  <none>       80/TCP                               2s
postgresql             ClusterIP  10.99.43.243    <none>       5432/TCP                             2s
userstore              ClusterIP  None            <none>       1389/TCP,4444/TCP,1636/TCP,8081/TCP  2s

==> v1/PersistentVolumeClaim
NAME              STATUS  VOLUME                                    CAPACITY  ACCESS MODES  STORAGECLASS  AGE
postgres-openidm  Bound   pvc-a93a0402-a8aa-11e8-9865-080027943f97  8Gi       RWO           standard      2s

==> v1beta1/ClusterRoleBinding
NAME                   AGE
looming-bronco-openam  2s

==> v1beta1/StatefulSet
NAME         DESIRED  CURRENT  AGE
configstore  1        1        2s
userstore    1        1        2s

==> v1/Pod(related)
NAME                                     READY  STATUS    RESTARTS  AGE
amster-5ccd84cc5b-gpwpx                  0/2    Init:0/1  0         2s
looming-bronco-openam-78fc5db98c-6fvz7   0/1    Pending   0         2s
looming-bronco-openidm-6cf9d5bdd6-snj2k  0/2    Init:0/1  0         2s
looming-bronco-openig-54ddf86f4c-t555w   0/1    Init:0/1  0         2s
postgres-openidm-6f86c8f6cc-qqgsj        0/1    Pending   0         2s
configstore-0                            0/1    Pending   0         1s
userstore-0                              0/1    Pending   0         1s

==> v1/Secret
NAME              TYPE    DATA  AGE
amster-secrets    Opaque  4     2s
configstore       Opaque  4     2s
openam-secrets    Opaque  9     2s
openidm-secrets   Opaque  2     2s
postgres-openidm  Opaque  1     2s
userstore         Opaque  4     2s
git-ssh-key       Opaque  1     2s

==> v1/ConfigMap
NAME                    DATA  AGE
amster-config           2     2s
amster-looming-bronco   8     2s
configstore             9     2s
am-configmap            7     2s
boot-json               1     2s
looming-bronco-openidm  7     2s
idm-logging-properties  1     2s
idm-boot-properties     1     2s
looming-bronco-openig   2     2s
openidm-sql             6     2s
userstore               10    2s


NOTES:

ForgeRock Platform

If you are on minikube, get your ip address using `minikube ip`

In your /etc/hosts file you will have an entry like:

192.168.100.1 openam.default.example.com openidm.default.example.com openig.default.example.com


Get the pod status using:

kubectl get po

Get the ingress status using:

kubectl get ing



When the pods are ready, you can open up the consoles:

http://openam.default.example.com/openam
http://openidm.default.example.com/admin
http://openig.default.example.com/

minikube ip

returns your environment's IP address such as:
192.168.99.100

Assuming the above IP is returned, edit the local DNS file /etc/hosts to map the hostnames specified in the Helm charts to it.

sudo bash
echo "192.168.99.100 openam.default.example.com openidm.default.example.com openig.default.example.com" >> /etc/hosts
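The echo above appends blindly, so running it twice leaves duplicate lines in /etc/hosts.  A slightly safer sketch (the function name and argument order are ours) only appends when the entry is missing:

```shell
# Add the ForgeRock hostnames to a hosts file only if not already present.
add_forgerock_hosts() {
  local hosts_file="$1" ip="$2"
  local names="openam.default.example.com openidm.default.example.com openig.default.example.com"
  grep -q 'openam.default.example.com' "$hosts_file" ||
    echo "$ip $names" >> "$hosts_file"
}

# Usage (writing /etc/hosts requires root):
# add_forgerock_hosts /etc/hosts "$(minikube ip)"
```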

ping openam.default.example.com

PING openam.default.example.com (192.168.99.100): 56 data bytes
64 bytes from 192.168.99.100: icmp_seq=0 ttl=64 time=1.042 ms
64 bytes from 192.168.99.100: icmp_seq=1 ttl=64 time=0.795 ms

kubectl get pods

NAME                                      READY     STATUS    RESTARTS   AGE
amster-5ccd84cc5b-gpwpx                   2/2       Running   0          26m
configstore-0                             1/1       Running   0          26m
looming-bronco-openam-78fc5db98c-6fvz7    1/1       Running   0          26m
looming-bronco-openidm-6cf9d5bdd6-snj2k   2/2       Running   0          26m
looming-bronco-openig-54ddf86f4c-t555w    1/1       Running   0          26m
postgres-openidm-6f86c8f6cc-qqgsj         1/1       Running   0          26m
userstore-0                               1/1       Running   0          26m

*****  NOTE:  It will take a while on the first run for all of the pods to reach the Running state.  Be patient.
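Rather than re-running kubectl get pods by hand, a small polling loop can wait for everything to settle.  A rough sketch using nothing but the `kubectl get pods` output; the function name and timeout are ours:

```shell
# Poll until every pod reports Running, up to `tries` five-second attempts.
wait_for_pods() {
  local tries="${1:-60}"
  while [ "$tries" -gt 0 ]; do
    # Skip the header line, then look for any pod NOT in the Running state.
    if ! kubectl get pods | tail -n +2 | grep -qv Running; then
      echo "All pods are Running."
      return 0
    fi
    sleep 5
    tries=$((tries - 1))
  done
  echo "Timed out waiting for pods." >&2
  return 1
}
```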


kubectl get ingress

NAME      HOSTS                         ADDRESS     PORTS     AGE
openam    openam.default.example.com    10.0.2.15   80        28m
openidm   openidm.default.example.com   10.0.2.15   80        28m
openig    openig.default.example.com    10.0.2.15   80        28m


Launch a browser and point it to http://openam.default.example.com/openam

At this point ForgeOps is configured for local execution under MiniKube.

Done!




Optional:  Notice that after all of this, Docker itself was never installed.  That is because the Docker daemon runs inside the Minikube VM, managed by Minikube.  We can, however, install the Docker client on the Mac and use it to work directly with the Docker images running in Minikube.  To do so, we need to configure the Docker client to point to the Minikube environment.

brew install docker

Validate installation
docker -v

Retrieve settings for Docker running inside of Minikube

minikube docker-env
export DOCKER_TLS_VERIFY="1"
export DOCKER_HOST="tcp://192.168.99.100:2376"
export DOCKER_CERT_PATH="~/.minikube/certs"
export DOCKER_API_VERSION="1.35"
# Run this command to configure your shell:
# eval $(minikube docker-env)


The command displays the export commands that need to be executed, plus an 'eval' command that can be uncommented and run to do it all in one step.
For example:

export DOCKER_TLS_VERIFY="1"

export DOCKER_HOST="tcp://192.168.99.100:2376"

export DOCKER_CERT_PATH="~/.minikube/certs"

export DOCKER_API_VERSION="1.35"

eval $(minikube docker-env)

Now, Docker client commands will execute against the Docker server inside of Minikube.
To validate:

docker ps

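The export/eval dance can also be wrapped so that one call repoints the client and confirms the target.  A sketch of our own (the function name is ours):

```shell
# Point this shell's Docker client at the daemon inside the Minikube VM.
use_minikube_docker() {
  eval "$(minikube docker-env)"
  echo "Docker client now targets: $DOCKER_HOST"
}
```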


Cleanup and Starting Over: These commands will assist in deleting environment components such as images, resources, pods, etc.

Note: the section below is destructive; use with caution.  In most cases these commands are not needed just to start fresh; they are the last resort when the environment is truly hosed.


minikube stop
minikube delete
helm nuke
rm -rf ~/.minikube/
rm -rf ~/.helm/



docker stop $(docker ps -a -q) #Stops all containers when the Docker server is running locally.  Containers under the control of Kubernetes will auto-restart by design.
docker rm $(docker ps -a -q) #Removes all stopped containers.
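For convenience, the full teardown can be collected into one function so no step is forgotten.  Destructive by design; the function name is ours.  Run only as a last resort:

```shell
# Tear down the whole local environment: VM, Helm releases, cached state.
nuke_local_devops() {
  minikube stop
  minikube delete
  helm nuke
  rm -rf ~/.minikube/ ~/.helm/
}
```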




Summary
At this point it is important to understand what we have accomplished.
  • VirtualBox configured with a VM for Kubernetes [Minikube].
  • Docker containers executing in the local VM that runs the Kubernetes cluster.
  • CLI tools downloaded for managing VirtualBox, Kubernetes, Helm, and Minikube.
  • Ingress enabled in the Kubernetes environment, plus some supporting plugins added to that same environment.
  • Helm chart repository added from Google Cloud Storage.
  • Helm chart from that repository executed, deploying to the currently configured Kubernetes environment [the local Minikube VM].
  • Sample configuration with sample images deployed and running.

The next step is to create custom Docker images based on these samples, create custom configuration based on these templates, deploy them to repositories, and use those repositories for deployment in this Kubernetes cluster, replacing the default samples with custom config, DNS, etc.




--- END ---