Since the webpage is starting to lag from holding so much data, I will start publishing these in chunks rather than gigantic posts; I cannot see myself breaking these down into smaller posts in the future – I will likely just go more in depth.
With that, I will start by describing Development Models / Environments / Stages of Development / Environment Architectures (Bare Metal / Cloud / Etc).
Review of the different Development Models and Platforms for Deploying Apps
Application Development generally takes place in four different Phases – Development / Testing / Staging / Production – with each Phase getting progressively closer to mimicking the final Environment.
This means that even if the Application will live on a “Bare Metal” Hypervisor or in a Hosted Cloud Server, that is not where the Development itself takes place; the following environments are what are generally used for these 4 phases of Application or Software Development:
Development Environment
This generally bears little resemblance to the final environment the Application will live in; the core Development is generally done in an Environment such as a Container like “Docker” or a Cloud Network Environment with no underlying services, as this allows the Developer to focus on coding the core Application itself.
Testing Environment
Once the Core Development / Coding is finished, the Application or Code may be moved into a Testing Environment that is structurally similar to the destination environment with underlying OS Services being introduced to test how the code lives within it.
There are some application / code testing tools such as Jenkins, CircleCI, and Travis CI, as well as Gerrit, which is aimed more at code “review” than testing.
Staging Environment
Once the Code is Tested and Reviewed it is moved into a Staging Environment that very closely resembles the destination environment. Some companies will actually run two production environments for this very reason, referring to them by names such as “Red” and “Blue”, where both Environments push Production traffic, however one may have a smaller scope or be less “mission critical” to the company.
This way the company doesn’t have a Staging Environment that might not perform the same as the Production Environment; instead there is a second live Production Environment that User Traffic can be gradually cut over to as the application proves it is performing well when a new feature or update is pushed out.
This option is far more practical in Cloud Environments, where unused resources do not cost the customer anything and virtual environments can be torn down and rebuilt rapidly without worrying about physical appliance issues.
Production Environment
This is when the software is in use by end users in production. By this time it should have been tested multiple times to ensure it will integrate smoothly, and while there may be some unforeseen issues such as traffic surges (heavy network utilization), these can also be planned for when designing the Infrastructure that the application will utilize.
Review of Deployment Models / Environments for Development, from oldest to newest
These are not Cloud Types, these are only the Models / Environment types!
As I go down this list, one thing to note is the level of abstraction between the Framework or Infrastructure in the Environment and the physical Host Hardware Appliance – how attached it is to the hardware, and how we as developers move towards that Abstraction!
Bare Metal
This is a Cisco UCS, a VMware ESXi stand-alone server (non-vCenter), or anything that hosts an environment on a single physical platform (not a Host with multiple VMs) – this is what is considered a Bare Metal Cloud.
Its weakness is that it’s constrained to its Physical Resources (no infra on demand), it is susceptible to total Cloud failure if the hardware fails, and repair time is delayed by buying parts / server warranty / etc.
Benefits are direct access to Hardware Resources for Applications (no over-committing), better performance and response times for users thanks to that direct access to physical resources, and total control over the security of this cloud type.
Virtual Machines
These come in two flavors: Type 1, a Hypervisor platform that runs its services directly on the Bare Metal Host (or Hosts), and Type 2, a Hypervisor that generally runs as an application within a host OS – with the Guest VMs running on top of either. This “Virtualization” environment type still has the restrictions of the Physical Appliance Hardware Resources, however it allows them to be shared across the Guest VMs, which provide Environments that can be used for testing different functionality, or how the Application performs in different Environments such as Windows or different Linux Distros.
The upside is that Applications running inside this Environment are bound to the Guest VM, and are only allowed to interact with the Network if configured to do so within the VM’s Network Configuration settings.
The downside, again, is possibly over-committing virtual resources, and hardware failure that results in downtime for the entire Environment.
Also, resources are allotted to the VM from the Host Hardware Resources, so if the environment for whatever reason implodes / wrecks the Guest VM, it can be deleted and a new VM spun up with different resources to address the issue – you are not melting physical appliances to the ground testing an application without knowing how it will interact with a certain Environment / OS / underlying services of the OS Framework.
If one environment is more successful or proves to be a better image (say, different Windows Version Builds), it can be saved as a “Golden Image” for later use.
Container-Based Infrastructure
Containers are things like Docker and the AWS and Azure Cloud-based Containers, along with many others I am sure – though I will likely focus on Docker during this course, Azure is on my roadmap of studies not too far down the road!
Containers add a MAJOR Layer of Abstraction by running Applications within segregated environments, much like Type 2 Guest VMs within a Hypervisor Environment – however they remove the Guest OS Framework and its underlying services entirely. This makes Containers much less resource intensive and much faster to load, because there is no OS to boot!
VMs emulate an entire Guest OS, whereas Containers share the OS of the Host Machine they run on, using specific binaries and libraries to run Applications – which means Applications run within their own “Containers” that consist only of the Application or a group of Applications.
These Containers include their own Container-specific Binaries and Libraries, so the user is interacting directly with the Application, cutting the OS out of it entirely; and because each Container has its own set of Binaries and Libraries, running multiple Applications that utilize the same Libraries will not compete for resources or run into conflicts.
Containers are also useful because of the “Ecosystem” of Tools that are being developed for them, such as Kubernetes, which simplifies the Orchestration of Containers making them easier to work with.
Lastly, and an important point: Containers are the foundation of Cloud Native Computing, where Applications are considered Stateless – this Statelessness makes it possible for any container to handle a request. This leads into the final phase of Abstracting the Development Environment from the Hardware Host and OS.
Serverless Computing
Serverless Computing reminds me of the Server equivalent of MPLS L3 VPN, in which it seems like you are sending traffic from one Private LAN Address space directly to your remote office’s Private Address Space – but of course there is the Internet between Point A and B.
The same is true with “Serverless Computing”: Servers do indeed power the computing, but they are completely invisible to the end user – hence the name “Serverless” – although there are of course Physical Devices / Servers somewhere providing these Computing Services.
The aim of Serverless Computing is to make “on-demand” services available without the need to run standing programs; it is geared more towards Applications that are built around Services, so that the Application can call these Services as needed.
How Serverless Computing works (a quick sketch follows these steps):
- Create an Application
- Deploy the created Application to a Container so it can run in any Environment
- Deploy the Container to a Serverless Computing Provider (AWS, Azure, Etc) – note that these services will likely have a time limit of allowed Inactivity until the Container is “Spun Down” (but it will immediately spin back up upon a Function call)
- When necessary your Application calls the Function (Services for task)
- Provider Spins Up an Instance of the Container, Performs the Task, and returns the completed task back to the Application
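As a quick hedged sketch of that flow – assuming a hypothetical HTTP-triggered function named “resize-image” at a made-up provider URL – the Application “calling the Function” is often just an HTTPS request:

curl -X POST https://faas.example-provider.com/functions/resize-image \
  -H 'Content-Type: application/json' \
  -d '{"image_url": "https://example.com/photo.jpg", "width": 200}'

The Provider spins up the Container behind that URL, performs the task, and the HTTP response carries the completed result back to the Application.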
This is very similar to Azure or AWS where you can spin up entire Virtual Environments, however this is not Infrastructure as a Service; this is for computing / running Application Functions on demand (it is strictly for running Tasks, and the Services associated with those Tasks, for an Application only).
Starting at Bare Metal running a single Kernel / OS with one pool of resources to draw from, we are now running Applications within Containers that allow them to work in ANY environment, using on-demand resources that only cost money as they are used – rather than paying for Power / IT Staff to maintain it / Upgrades / Bug Fixes / Downtime from all types of issues. (This is referred to as “Elastic” or “Scalable” because all the resources you need are available, though you will be charged for them.)
Imagine standing on the planet Mars with no Atmosphere containing Oxygen, and instead of building some kind of Atmosphere on it so you could breathe, your lungs reach out to Earth’s Atmosphere for each breath you need – and it is always available!
That is the equivalent of Serverless Computing coupled with Containers, amazing!
However, coming back to Earth for a moment, the flip side of the coin of having zero control or visibility into the Server introduces the possibility of trading off security, or at the very least your own control over it – leaving it up to the Martians providing the Oxygen!
Review of different types of Infrastructure to host these Environments
Having started from the beginning of time in terms of “Abstraction” above, it is important to consider the Infrastructure of these Environments and their evolution, so you can determine which option best suits your needs (especially if you have limited options).
Just like in the beginning, if you wanted to run an Application you had to run it on a PC, and if you wanted multiple PCs to talk via an Application you had to put them on the same Network so the PCs could communicate – and thus On-Prem Infrastructure was born.
On-Premise Infrastructure
This is literally the Infrastructure within your building, which is rapidly going away in terms of Servers / Phone Systems / Exchange Servers, given the cost of Downtime / Emergency Fixes / Upgrades / Server Refreshes when they hit End of Life, Etc.
If more resources are needed they need to be ordered, which means delay / cost / the possibility of needing to upgrade the Server itself to handle the expanded memory or cards, etc.
For this reason many customers have moved their systems to a “Cloud” that they pay a fairly standard fee for (depending on the type of cloud / service agreement), and if more space is needed it is as easy as the Cloud Provider adding more resources to the Cloud for a cost, without any surprises or downtime (in most cases).
Private Cloud Infrastructure
A “Private Cloud” is a fairly limited cloud that offers services. For example, my current Employer Marco has a Hosted Data-Center (which I am actually on-call for this week to support), where customers pay for IaaS / SaaS / VaaS / UCaaS / Something-as-a-Service – most SMB companies cannot afford their own small Data-Center or RDS Farm off-site, so we offer these services alongside our “Managed IT” Services.
This type of Cloud solution / offering fits a smaller business great, because they don’t want to continue buying / upgrading / maintaining / paying for on-prem Servers or Phone Systems – where if there is a fatal software error you need an IT Engineer on site with a Linux Boot Disc to rebuild the Host and restore the VMs (hopefully you had off-site backup jobs successfully running) to the last known good state, or if there is a hardware failure you need to wait for a part to be shipped overnight.
Generally the customer will have a leased VPN Endpoint to ensure an encrypted connection to the Cloud, or may only need RDS services, in which case they would just run RDS OTT (Over The Top, AKA over the Internet), with something like Two-Factor Authentication for security rather than full-blown encryption.
Most modern businesses cannot just pause working and tell customers their servers are down so they can’t take orders; this impacts employees who need consistent paychecks and employers who need to provide those paychecks or risk losing employees.
So outside of On-Premise networking equipment for device communication (which is a necessary evil I don’t see going away any time soon, though maybe changing), Servers are either in a Private Hosted Cloud Service, or businesses might go directly to a Public Cloud Service themselves and hire in-house IT Staff to maintain connectivity.
Public Cloud Environment
Public Clouds can run open systems such as OpenStack and Kubernetes, or they may be more proprietary, like the more well-known names such as Azure or AWS Cloud Services.
These providers also range in secured connection options, from Site-to-Site VPN configurations (which I actually see more with AWS), while Azure leverages what is called SAML (Security Assertion Markup Language), which uses IdPs (Identity Providers) to verify Authorization to Resources, as discussed in the API Security section (though SAML is an Open Standard).
The advantages are of course pay-as-you-use resources, on-demand resource expansion, and one huge benefit is redundancy in the form of Data-Centers in different Regions of the world – you may be using one that is lightning fast in your Region, but if a Natural Disaster or whatever takes it offline, Public Clouds generally have total redundancy, so if the Data-Center in your region loses power you won’t lose your work beyond maybe re-logging in (maybe) and a slightly slower response time.
The drawbacks include what is called the “Noisy Neighbor” problem, in which you may end up contending with another tenant of that same cloud for resources. This is generally caused by a Cloud Provider “over-committing” resources, assuming that not all tenants are going to fully utilize their allotted resources – I am not sure how much this happens, but it can.
Another, maybe more real-world issue I’ve troubleshot time and time again is customers with all countries blocked except USA / Canada / Mexico, not realizing the website they are visiting hosts its content on a Public Cloud somewhere in Brazil, which they have blocked – which can be a real pain to troubleshoot unless you have run into it enough.
The Noisy Neighbor / overcommit issue becomes a problem because a Public Cloud provider might bury it in the details of what is promised: offering x amount of CPUs, where those are Virtual CPUs rather than Physical CPUs, which do not deliver 1:1 performance. For example, an i7 Quad-Core Processor in your laptop with 8 vCPU cores that CAN be allotted does not mean the PC will perform like it has 8 dedicated cores running at full speed – it has 8 Virtual Cores that can run 8 different processes at half capacity.
Hybrid Cloud Infrastructure
Not to be confused with “Multi-Cloud” Infrastructure where a customer will use multiple Cloud Providers for different Applications!
^^ Seems like another Cisco’y exam difference to know as well.
Important Note – Hybrid Clouds are only Private Clouds that utilize Public Cloud Resources; Bare Metal / On-Prem connected to an Off-Prem “Cloud” is not Hybrid – both must be off-prem: Private to Public for one Application = Hybrid!
A Hybrid-Cloud Infrastructure is one that utilizes one cloud that in turn utilizes another cloud. For example, a business might have O365 as their Cloud Email Service Provider, however that email might flow through a private cloud’s Email Spam Filter system first.
In this scenario, when they send or receive an email in one cloud, it is sent over to the other cloud to be inspected for malware or suspicious activity – in either direction, coming from or going to the Customer’s Cloud-Hosted Email Service.
My learning module notes that the decision making is best left to the applications handling traffic and not the end-user application; that services like Google Cloud offer basically every service there is (Google Suite), which can interconnect all the different services in one Cloud Platform; and that Container Orchestrators are gaining popularity by providing a “Cloud-Agnostic” layer which the Application can consume to request necessary resources – reducing the environmental awareness needed by the application itself (it helps the application work with its cloud platform).
Edge Cloud Infrastructure
This is the newest type of Cloud as of this writing, growing rapidly in popularity with the growth of IoT, by giving the fastest (as close to immediate as possible) response times – like when you tell Alexa to tell you how far away the Earth is from the Sun in inches, you expect Alexa to respond back immediately.
In a more serious manner that many I am sure can relate to: latency or lag when you are playing something like Call of Duty or Elder Scrolls Online – half a millisecond can make the difference between who shoots who in CoD, or whether your healer saves the group from a dungeon boss before its special attack lands as you cast your heal.
Edge Computing is achieved by bringing the Cloud physically closer to Regions, with smaller but adequate Cloud Data-Center locations backed by a much, MUCH larger Public Cloud somewhere off in the Internet.
Edge Cloud Infrastructure is almost like a Hub-and-Spoke network, where the Hubs are Data-Centers branching from the middle out in every direction; when you connect to an Edge-Cloud-based service where a ms of response time matters, the Provider will assign the Edge Cloud Hub physically closest to you for the fastest possible response time.
These hubs can either process requests entirely, or will continuously process and feed data back to the Public Cloud to get a response back ASAP. However, these types of clouds must have absolutely maximum-power Hardware / Resources / Network throughput, so you will likely not get this type of immediate Edge Cloud service if you live in a Rural Area (though if your ISP only provides 10 Mbps DSL as its fastest speed, you probably are not dominating CoD matches or surviving many Elder Scrolls Online Dungeons) 🙂
Docker Container
This is the most popular Container solution, and it can run on Windows as well as Linux, but it must have WSL to allow it to work in a Windows OS environment. Docker itself is not the Container; it is the Software that can create and hold Containers.
Docker Components that Docker “Wraps”
Namespaces – Namespaces isolate the different elements of the Docker container: process IDs are mapped in the pid namespace, the filesystem is isolated in the mnt (mount) namespace, and networking is isolated in the net namespace
Control groups – cgroups are a standard Linux mechanism to limit the resources (RAM / Storage / CPU) used by the application
Union File Systems – UnionFS is a File system built layer by layer, combines resources
The Docker Wrapper (made up of Namespaces, cgroups, and UnionFS) is used to create an image, which can be thought of like a template; when that image is needed, Docker is able to create an Instance (a Container) from that Image, and the Image can be stored on Docker Hub
The Workflow of creating a Docker Container (using a “Dockerfile”; a command sketch follows these steps):
- “docker build” to build from a local image or one in the Docker Registry, or pull a copy of an image from a registry using “docker pull”
- “docker run” to run a Container based on the image, or “docker container create” to create one without starting it
- The Docker daemon first checks for a local copy, then pulls the image from a registry if not found locally
- The Docker daemon creates a container based on the image; if “docker run” is used, it logs into it and executes the requested command
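Putting that workflow together as a quick command sketch (the image / tag names here are just placeholders):

docker pull ubuntu        # pull a copy of an image from a registry
docker build -t myapp .   # build an image from a local Dockerfile
docker run myapp          # daemon checks for a local copy of the image,
                          # pulls it if missing, creates the container,
                          # and executes the requested command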
A “Makefile” is a file that the Make utility runs to build an Application in the C Language; this is what a “Dockerfile” does in Docker – it’s simply a text file (kind of like creating a requirements.txt file while inside a VenV using “nano requirements.txt” and listing application==build#, like 2.6.3).
My Ubuntu VM died (Thanks Hyper-V) so I had to run Ubuntu in Docker
Total credit to Network Chuck on his video on this subject which can be found here:
https://www.youtube.com/watch?v=eZpLjKv9xvA
I grabbed a screenshot of the Powershell commands to “pull” and “run” Ubuntu:
“docker pull ubuntu” downloads the latest image of that distro flavor so the image is now on the machine, and then “docker run -t -d --name docubuntu ubuntu” runs it, with -t opening a TTY Line and -d running it in a Detached or Headless mode (like how a Raspberry Pi can run in “Headless mode” on the network by using VNC to connect).
The “--name docubuntu” part names the Container, and it is followed by the image (ubuntu)
To verify, “cat /etc/os-release” shows that this pulled down Ubuntu 20.04 LTS:
So a catastrophic VM Meltdown got in the way of your studies? No problem!
Then you can simply type “exit” to return to the PowerShell prompt, “docker stop docubuntu” to stop the Container, and even “docker rm docubuntu” to just delete it.
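For reference, that whole PowerShell session boils down to something like this (the “docker exec” line is my own addition for attaching a shell to the detached container; everything else matches the commands described above):

docker pull ubuntu                         # download the latest Ubuntu image
docker run -t -d --name docubuntu ubuntu   # TTY open, Detached / Headless
docker exec -it docubuntu /bin/bash        # attach a shell to the container
cat /etc/os-release                        # inside: shows Ubuntu 20.04 LTS
exit                                       # back to the PowerShell prompt
docker stop docubuntu                      # stop the container
docker rm docubuntu                        # delete it entirely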
It turns out that the Hyper-V meltdown led to a good example of spinning up a Container in Docker for the very first time; this information was actually in the Cisco Study Group module as well, and Network Chuck nailed the topic perfectly – so hats off to you, Chuck!
A look at the entirety of “docker --help” in PowerShell to see all available options
I have to say that Docker has intuitive commands with great help descriptions – you go, Docker!
After some contemplation, I decided it’s just a better idea to run Docker in Ubuntu
I am sure there is some way to get a Docker Bash shell in Windows (or I assume), but after losing more study time to my on-call hours for work, I decided to rebuild my VM with Ubuntu 18.04 LTS, as it seems to run MUCH smoother in Hyper-V.
Back to Docker now in Ubuntu Bash – Building a Docker Template!
A Docker Template is created by making a Dockerfile that can be called upon to spin up a container; this first example shows an Ubuntu “Template” that is a single line.
VERY IMPORTANT DETAIL I FULLY EXPECT TO SEE ON EXAM DAY BELOW:
Just as quickly as I got off to a good start, I learned my first mistake trying to make the most basic Docker Container I could in Ubuntu – by using the wrong file name(!):
A Docker file like this MUST be called “Dockerfile” exactly; my capitalized F in “File” makes Docker not recognize it, and it will throw this error!
Since I am running on Ubuntu, and it can pull the image for a Container from the system, this single-line file SHOULD create an Ubuntu Docker Container:
As soon as I renamed this single-line file to “Dockerfile” and ran the build with the proper name:
This is quite literally the most basic tip of the Docker iceberg, but in terms of spinning up a Container using “docker build -t SomeName .” to use the local Dockerfile to create the container (with “.” defining the build context to use), it MUST be “Dockerfile” with no extension like Dockerfile.txt – on exam day, if you see “.” being used to reference an image to spin up a Docker Container and the file is ANYTHING other than “Dockerfile”, it’s wrong!
After the build I verify the image is present with “docker images”, then run it with “docker run -it (name) /bin/sh” to jump into the container as shown here:
Literally just a one-line “FROM ubuntu” in a Dockerfile spins up a Docker Container, and to back out simply type “exit” in the container shell.
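Recapping that minimal demo as commands (file name casing matters, as noted above):

echo "FROM ubuntu" > Dockerfile    # MUST be named exactly 'Dockerfile'
docker build -t myubuntu .         # '.' = build context holding the Dockerfile
docker images                      # verify the new image is present
docker run -it myubuntu /bin/sh    # jump into the container
exit                               # and back out again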
^^^ The above demo is a clean Ubuntu OS, but to build a real template, the Dockerfile needs to be filled in with lots of stuff to spin up the container we need!
Looking at the “docker --help” from PowerShell above, it looks like those same commands apply to Linux as well, so it may very well have worked – but away we go!
Below is an example of a Docker Container / Template that is a bit more robust:
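Since the template itself was shown as a screenshot, here it is reconstructed line for line (matching the rundown that follows):

FROM python
WORKDIR /home/ubuntu
COPY ./sample-app.py /home/ubuntu
RUN pip install flask
CMD python /home/ubuntu/sample-app.py
EXPOSE 8080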
A quick run down of this template in a line by line way:
- “FROM python” invokes a default image from Docker Hub with Python installed
- “WORKDIR /home/ubuntu” tells Docker to create (and switch to) this directory in the container
- “COPY ./sample-app.py /home/ubuntu” tells Docker to copy the Python script (the app we want to run) to /home/ubuntu
- “RUN pip install flask” tells Docker to install Flask to the container
- “CMD python /home/ubuntu/sample-app.py” tells Docker the command to run when the Container is spun up – running this Python script
- “EXPOSE 8080” tells Docker this container is listening on Port 8080, which is the port Flask listens on here; for other Apps / Services it would be changed, like a Web Server would listen on / EXPOSE port 80 or 443
I made “sample-app.py” with “print(‘Hello World!’)” for the sake of a demo running this:
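In command form, that demo was roughly:

echo "print('Hello World!')" > sample-app.py   # the one-line demo app
docker build -t sample-app-image .             # build from the Dockerfile above
docker run sample-app-image                    # prints: Hello World!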
That is how Docker Templates are created (a very simple demo) and verified here:
I am actually kind of surprised that worked, that is one awesome Docker Container! 😀
All study-fried humor aside, this is what is meant by a template: it will build a Container that you can then “docker run -t (something)” if it appears in “docker images”. This is what makes “Dockerfile” a Container Template – it builds containers by the parameters specified within the template, and “docker build -t (something) .” calls on the Dockerfile to build the Container!
Running a Docker File locally in Ubuntu is similar to my example in PowerShell
This is running it in “Headless” mode; if the image contained a Port it was listening on, that would need to be defined within the “docker run …” command:
It does indeed run the Container, as shown by the hashed output, but because the image’s Python script is a single print statement, it does not show in “docker ps” after being run – the Container prints “Hello World!” and exits, as seen above.
However, if I run an actual container locally that keeps running, we can see more:
Here it actually shows a Container that is running. I didn’t need the -P as it is not listening on any ports (I figured it was worth a shot), so unfortunately this demo doesn’t show the inside and outside ports – but it does show details from all running Containers.
If this were a webserver listening on Port 443, the Port value would look something like:
0.0.0.0:28754->443/tcp
This acts almost like NAT on a Firewall: 443 is the Inside Docker Port, while the randomized Port # shown is used to listen on the outside, so that multiple Containers can run on the same Port (443) inside their Containers while each gets a randomized outside Port # so the Container Ports don’t conflict with each other.
Based off this information, you could make a connection from the local host using curl:
curl localhost:28754
Being that 443 is “Inside” the Container itself, and the Container is running on this local host, “curl localhost:port#” to the outside randomized Port # calls this container and verifies it is responding on that port, with a reply like “You are calling me on (IP addy)” where the IP is that of the host machine.
Speaking of being like NAT, you can also specify a unique outside-to-inside port mapping so you know which port you are trying to hit – literally a NAT config for the Container:
docker run -d -p 8443:443 --name (somethingelse) (ImageName)
This will run the Container with an assigned unique port rather than a randomized port, so you can standardize (or know without checking) which port is being used to talk to the Container that is being run.
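As a sketch (assuming a hypothetical image “mywebserver” that listens on 443 inside the container):

docker run -d -p 8443:443 --name webtest mywebserver
docker ps --format '{{.Names}}  {{.Ports}}'   # webtest  0.0.0.0:8443->443/tcp
curl -k https://localhost:8443                # hit the mapped outside port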
I had a couple of Derps trying to “stop” and “remove” this container shown below:
First it shows that I need to use a “name” to stop or remove a Docker Container, so to “stop” the Container I had to use the randomly assigned name (because I didn’t assign one using “--name (something)” to add a name to the Container). One thing that is still kind of boggling my mind is that it seemed to allow me to remove the container using “sudo” in the remove command, prompting for a password, but then: No such container.
Looking again back at my PowerShell notes, I notice “rmi” is to remove an Image, and those repositories listed (Images) have an IMAGE ID, so I am going to attempt to remove my “sample-app-image” and hope not to kill my VM:
I am sure there is a way to force this to stop with for example “docker kill …” but for the sake of keeping the forward movement I am going to just leave this one alone for now 🙂
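For future reference, the clean stop / remove sequence looks like this, with (container) being the name or ID from “docker ps -a”:

docker ps -a                  # list ALL containers, running or not
docker stop (container)       # stop it first ('docker kill' to force it)
docker rm (container)         # remove the container itself
docker rmi sample-app-image   # 'rmi' removes the IMAGE, not the container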
How to save a Docker Image to a Registry / Repository
This is one I’ve done in my Windows environment at https://hub.docker.com shown here:
Which I have absolutely never used beyond creating it and linking it to Docker running in my Windows 10 environment via WSL 2.0 – so let’s put it to use by first logging in:
Success!
This Repository is similar to Git / GitHub but without the timelines / branches / conflicts that Git has (that I am aware of). I will see later down the road if I can play with this to find out whether a Container pushed to this Repository with the exact same name would overwrite the original Container, or how it would handle that – but for the sake of making progress again, I don’t have time to bleed! (or play with my new IT Toy just yet!)
I did get myself in trouble and had to close the Terminal, but it remembered my info and logged me back into Docker Hub just by issuing the command when back in Terminal:
That could be a good thing, or very possibly a security threat – something I hope to cover later in this week’s studies (loads of awesome stuff!) – so I wanted to demo this behavior.
How to find the names of your local Docker Containers that aren’t running, to work with:
I am borderlining brain failure from exhaustion here, so that was very hard to find: I want to commit / push my Containers to my Repo, but you need the name, which is not displayed in “docker ps” if they are not running!
Now that I have the names, let’s see if I can Push or Commit some stuff out to my Repo, as I am so close to the end of the Docker content of this module – and I have so much more to go in this week’s module over the weekend – Woohoo!
This took a LOT of trial and error to get this correct:
I kept getting Access Denied when I would finally get it to push, and found that after so much idle time it was actually de-authenticating my session. It also kept not working because I was not able to run the “myubuntu” Container without getting logged directly into the Bash Prompt, until I used “docker run -it -d myubuntu” – so I think it might have been the “-d” that stopped it from pushing me right into the Ubuntu Container.
Some important things that tripped me up forever working with this (a sketch of the working sequence follows this list):
- If you see Access Denied errors when pushing, do “docker login” again
- When doing a commit, when in doubt, use the Container ID #
- Use all lower case and probably just a # for the tag at the end
- Do not define the repository name, just your username – unlike GitHub Repos
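Putting those lessons together, the sequence that finally worked for me looked roughly like this (“yourusername” being your Docker Hub username, and the ID coming from “docker ps -a”):

docker login                                           # re-authenticate first
docker ps -a                                           # grab the Container ID
docker commit (container-id) yourusername/myubuntu:1   # lower-case, numeric tag
docker push yourusername/myubuntu:1                    # username only, no repo name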
My material from my class oddly enough demonstrates it in a way that I cannot get to work for the life of me – I’m probably just tired – but it is now out on my Docker Repo:
I got errors every time I used anything other than lower-case, so I don’t think that will work here, but I am sure I will do a deeper dive. Another point I think is worth bolding: a container does NOT need to be running for it to be “pushed” – you can get the info with “docker ps -a” to see offline Containers!
Also, when running the container, just like you can give it a port mapping with -p 8443:443, you can use the optional --name modifier in the command to assign a static name – which I didn’t bother doing, and I am so close to done I won’t at this time.
Also a very quick demo of docker pull
This is very easy: you browse the Docker Hub repository via Explore (which is how I found couchbase), shown here below as just a very simple operation:
I wanted to capture this odd “manifest unknown” error that I got trying to pull from a public / personal repo; then I just searched for a well-known image and found couchbase by simply searching Docker Hub to find the pull command needed – and there it is in my images!
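The whole operation is just:

docker pull couchbase   # the pull command straight from the Docker Hub page
docker images           # and there it is in my local images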
I’m calling it here, however I will play with this and go more in depth at a later time
This huge read is maybe half of this week’s modules; at this point I can’t imagine actually pulling these notes apart into different articles from my 8-week class, though I plan to re-cover all topics that I need to brush up on after I cross the finish line of this class.
I hope this helps, and I will be back soon with some CI / CD and Securing Applications and Communications, which will be some good stuff.
With that, Happy Friday, until next time! 🙂