Ralf Frankemölle

Wind of change

Exploring new opportunities

It’s been another 8 months since my last post. During this time, Broadcom acquired VMware, leading to significant changes within the company.

With that going on, my daily business was basically non-existent, and I explored different topics. I dabbled with AWS and GCP, refreshed some Python skills, worked on the CS50X course (which I still need to finish) and looked for a change in career. After a while, a former colleague sent me a job posting from Nutanix. I have always liked their products but didn’t get along with some of their older marketing campaigns. However, much time had passed, and I hadn’t really checked in on what they were doing in the storage space for a while.

So I looked at the position for NUS Solution Architect in Central EMEA and decided to dive into what NUS does.
After a brief intro, I was sold on the idea of managing all my storage needs from the Nutanix platform.

Long story short, I applied, went through some interviews and was accepted as the Portfolio Solution Architect for Nutanix Unified Storage in Central EMEA.
I started this role in May and have written about it on LinkedIn here:

Nutanix Unified Storage

With the career change addressed, let’s briefly look at what Nutanix Unified Storage does. In a nutshell, it combines Nutanix Volumes, Nutanix Files and Nutanix Objects in a single subscription. Customers have a per-TiB subscription across their whole Nutanix install base, not limited to a cluster or site. So I buy an aggregate capacity for all AOS clusters and use the storage wherever I need it, instead of assigning it to a single cluster.

The storage function customers use also does not matter. Whether you provide SMB or NFS file shares, S3-compatible object storage or volumes via iSCSI, the NUS subscription always counts the TiBs consumed. From a high-level perspective, NUS is visualized like this:

Nutanix Unified Storage (NUS) overview

The NUS components sit on top of AOS and the hypervisor (be that Nutanix AHV or vSphere). For Files, we deploy managed VMs on the underlying hypervisor and push the file services bits to them. For Objects, we use our “Microservices Platform”, which you can think of as a managed Kubernetes environment, solely to run our object storage components.

In any case, we ultimately send the IO for our data services to the Nutanix Controller VM (CVM), which handles the IO to the physical layer. I won’t explain the details of the CVM, since we don’t bother much with it on the NUS layer, but there are many articles on its inner workings.

Apart from the data services themselves (file, block, object), NUS brings loads of additional goodness:

Overview of additional data services in NUS

That’s it for today – I will probably pick individual services and explain each of them in its own post.

 Thank you for checking in after a while!

Security, Network, VMware - Joerg Roesch

VMware (by Broadcom) Explore US Network & Security Sessions 2024

VMware (by Broadcom) Explore will take place in Las Vegas from the 26th to the 29th of August 2024. VMware Explore EMEA in Barcelona runs from the 5th to the 7th of November 2024. It will be the first conference under the Broadcom flag, so there will be additional sessions from other Broadcom business groups like Symantec, Brocade, DX NetOps, etc. In this blog post I provide recommendations for technical sessions related to Network & Security topics at the US Explore event. I have excluded certification, Hands-on-Labs and Meet the Expert roundtable sessions from my list. I have focused on non-100-level sessions; only for new topics have I made a few exceptions.

Pricing

A full event pass for VMware Explore costs $2,395 for the US event. If you book before the 15th of July, you get the pass for $2,195.

Full Event passes provide the following benefits:

  • Four days of sessions including the general session, breakout sessions, roundtables and more, with content tailored to both the business and technical audiences

  • Destinations, lounges and activities such as The Expo and Hands-on Labs 

  • Focused programming for SpringOne

  • Admittance to official Explore evening events including The Party and events in The Expo

  • Exclusive Explore swag

  • Attendee meals

Your full event pass registration allows you to purchase VMware Certified Professional (VCP) and VMware Certified Advanced Professional (VCAP) certification exam vouchers at a 50 percent discount (exams must be taken onsite during Explore Las Vegas).

VMware Explore Session Recommendations

Now I come to my session recommendations, which are based on my experience, on well-known speakers from previous years, and on topics that interest me from a Network and Security point of view. But first I have to say that every VMware Explore session is worth joining; customers, partners and VMware employees have put a lot of effort into preparing very good content. For me, the VMware Explore sessions are the most important source of technical updates, innovation and training. All sessions can also be watched after VMware Explore. A few hints on the session IDs: in an ID like VCFB1499LV, VCF stands for VMware Cloud Foundation (the business unit) and B for breakout session. LV indicates that it is a session in Las Vegas. Sometimes you also see a letter D behind LV, which means it is not an in-person session; D stands for distributed. Please take into account that VMware by Broadcom has new business units:

  • VMware Cloud Foundation - VCF

  • Application Networking & Security - ANS

  • Modern Application - TNZ

  • Software-Defined Edge - SDE

There are also “legacy” Broadcom business units like the Agile Operations Division (AOD) or the Enterprise Security Group (ESG), which includes Symantec.

Application Networking & Security (ANS) Solution Keynote

NSX Sessions - Infrastructure related

Security Sessions

VMware AVI Load Balancer related

Network & Security Cloud Sessions

NSX Sessions - Container related

Network Monitoring related

DPU (SmartNICs)

Symantec Sessions

SD-WAN and SASE

Summary

Please take into account that there are a lot of other interesting VMware by Broadcom Explore sessions covering many other topics like AI, cloud, edge, containers, vSphere, etc.

Feel free to add comments below if you see other must-see sessions in the Network & Security area. I wish you a lot of fun at VMware by Broadcom Explore 2024 in Las Vegas!

Joerg Roesch

How DPUs accelerate VMware ESXi with NSX - a deeper look at the data path!

With vSphere 8 and NSX 4, VMware has introduced support for DPUs (Data Processing Units); see my blog post How NSX and SmartNICs (DPUs) accelerates the ESXi Hypervisor! as an introduction to this topic. DPUs are perhaps better known as SmartNICs, but there is a slight difference between DPUs and SmartNICs. Both serve to accelerate and offload tasks in data centre environments. DPUs are more versatile and capable of handling a broader range of data-related workloads, including networking, storage, and security tasks. SmartNICs are more specialised and primarily focus on optimising network-related functions. The choice between the two depends on the specific needs and use cases of the data centre or cloud infrastructure. A DPU runs its own operating system (OS) and is managed completely independently. SmartNICs are integrated and managed from the operating system (OS) running on the host CPU.

VMware uses DPUs with ARM processors. DPU support with vSphere 8 and NSX 4 is branded by VMware as Distributed Services Engine (DSE). NVIDIA and AMD Pensando currently support DPUs with vSphere and NSX; Dell EMC and HPE support the solution on the server vendor side. Other NIC and server vendors are on the roadmap. VMware also plans to support vSAN and bare metal on DPUs in the future.

The DPU architecture accelerates the networking and security functions in the modern software-defined data center. NSX networking and security services are offloaded to and accelerated on the DPU. DPUs also provide enhanced visibility into network communications, which helps with troubleshooting, mitigation of hacking attacks and compliance requirements. This enables VMware customers to run NSX services such as routing, switching, firewalling and monitoring directly on the DPU. It is particularly interesting for users with significant demands in terms of high throughput, low latency and increased security standards.

By offloading network and security services to the DPU, the x86 host frees up compute resources for the applications. As a result, more workloads can be deployed on fewer servers - without compromising the monitoring, manageability and security features offered by vSphere and NSX. DPUs reduce the computational load on the main processors, thereby reducing energy consumption and the associated CO2 emissions. In addition, because DPUs concentrate performance and efficiency on fewer servers, the number of hardware components needed is reduced. This reduces waste and protects the environment.

With DPUs, the NSX services (routing, switching, firewalling, monitoring) are offloaded from the hypervisor to the DPU (Data Processing Unit), see figure 1. A modified, purpose-built ESXi image is installed on the DPU for this purpose. The new architecture runs the infrastructure services on the DPU, providing the necessary separation between the application workloads running on the x86 compute platform and the infrastructure services. This is an enormous advantage for customers with high security and compliance requirements. Regulatory authorities such as the BSI (German Federal Office for Information Security) in particular often require separation of production and management traffic for certain environments.

Figure 1: x86 and DPU (Data Processing Unit) architecture

Data-Path Model Evolution from VDS via SR-IOV/EDP to DPU

Before describing the data-path model options of a DPU, I want to show how things currently work with a standard VDS (vSphere Distributed Switch). Afterwards I will look at the VMware performance data-path models SR-IOV (Single-Root Input/Output Virtualization) and EDP (Enhanced Data Path), which were designed for performance requirements before DPUs. Finally, I will come to the DPU data-path options VMDirectPath (UPTv2) and Emulated Mode, which bring the acceleration in hardware.

VDS Standard Datapath

Figure 2 shows the standard datapath for a VDS; it does not matter whether an N-VDS or a VDS is in use, the principle is the same. When a packet arrives at the network card of the ESXi server, a short interrupt causes a context switch on the CPU. After the routing and firewall rules have been verified in the slow path, the packet is forwarded. The same process takes place at the VM level: the CPU is loaded there as well and another context switch occurs. This causes problems especially for applications with a high packet rate.

Figure 2: Standard VDS datapath

 

Data path models: SR-IOV or EDP

Even before DPUs, VMware introduced the SR-IOV (Single-Root Input/Output Virtualization) and EDP (Enhanced Data Path) data-path models (see figure 3) to provide techniques for workloads with high performance requirements. SR-IOV bypasses the virtual switch completely: traffic is passed directly from the physical network card to the virtual machine. The "Physical Function" (PF) and the "Virtual Function" (VF) map the communication from the physical NIC to the VM. Since there is no virtual switch in the data path, the CPU is not loaded and there is no additional latency. There is a one-to-one relationship between a VF and a VM.

The number of Virtual Functions depends on the network card. SR-IOV must be supported by the PF driver, the ESXi host, the VF driver and the virtual machine operating system. As a virtual machine driver, SR-IOV relies on vendor-specific PMD (Poll Mode Driver) drivers to access the network adapter directly. The disadvantage of SR-IOV is that this hardware dependency means vSphere features such as vMotion or DRS (Distributed Resource Scheduler) are not supported by VMware.

Figure 3: Performance Data Path models SR-IOV and EDP

A second data-path model for improving performance is Enhanced Data Path (EDP). EDP is an NSX-specific function. Dedicated CPU cores on the hypervisor are reserved for forwarding the data packets. When a packet arrives at the ESXi server, a copy is sent to the fast path and the flow cache is checked. If the forwarding information and, in the case of an active firewall, the so-called five-tuple (source IP address, destination IP address, source port, destination port, protocol) are successfully verified, the packet is forwarded to the virtual machine. The flow cache is located in a dedicated memory location and is constantly polled by the CPU. If there is no matching entry in the flow cache, the network and security configuration from the NSX Manager is consulted in the so-called "slow path" in order to send the data packet to its destination. The slow path then sends an update to the fast path so that future packets are processed directly from the flow cache.

In the slow path, a processor-side load is placed on the hypervisor. The VMXNET3 PMD driver is used in the virtual machine. The clear advantage of this method: with EDP, vSphere features such as vMotion or DRS are still available.

Data path models for a DPU

DPUs combine the advantages of SR-IOV (Single Root I/O Virtualisation) and EDP (Enhanced Data Path) and map them architecturally (see figure 4). The DPU contains the hardware accelerator component for fast packet forwarding and reserves dedicated CPU resources for packet processing in the fast path.

Figure 4: Performance Data Path models VMDirectPath (UPTv2) and Emulated Mode with a DPU

Thus, the DPU converts packet processing, which is otherwise implemented in software, into hardware pipelines, and the processing of NSX packets moves from the server to the DPU. This in turn reduces the server's CPU consumption and frees up cache and memory resources that are shared with the VM and container workloads.

VMs can use passthrough Virtual Functions and still consume the NSX functions. The hardware packet-processing pipeline as well as the embedded processors implement the NSX datapath functionality for this traffic.

The DPU architecture combines the advantages of passthrough and the current NSX Enhanced Data Path with the VMXNET3 drivers. A dedicated VMDirectPath VF module implements the new UPT (Uniform Passthrough) architecture on the ESXi hypervisor. Virtual Functions based on VMDirectPath (UPTv2) then represent the virtualised instances of the physical network adapter. VMDirectPath (UPTv2) can be activated in vCenter via a checkbox at the VM level.

If emulated mode (the default mode) is used, traffic runs through a distributed MUX switch on the ESXi hypervisor. Besides the acceleration in hardware, packet forwarding is processed in software (fast path/slow path) in case of a hardware table miss.

SmartNICs have the advantage that virtual machines can operate in pass-through mode while functionalities such as vSphere vMotion remain intact. In addition, unlike with SR-IOV, there are no hardware dependencies for the VMs' guest drivers.

Please check out the following YouTube video from my colleague Meghana Badrinath for a data-path deep dive: DPU-based Acceleration for NSX: Deep Dive

Summary:

With DPUs in NSX 4 and vSphere 8, VMware improves performance at the hypervisor while addressing the current network and security requirements of modern applications. Especially in times of increased security requirements due to ransomware and other potential attacks, this is an enormous advantage, as is the physical isolation of the workload and infrastructure domains. Purchases of new dedicated hardware in the form of additional DPU network cards with their own ARM processors must be taken into account and should be considered in future architecture planning. These investments are offset by savings in energy costs and a reduced total number of servers.

Ralf Frankemölle

VMware Cloud on AWS - Part 2 - Automated Deployment with Terraform

Welcome back to my next post on Securefever! If you missed our manual deployment guide for VMC on AWS, you can catch up here:
https://www.securefever.com/vmc-aws-manual-deployment

It took a while for this second post because I had the great chance to write a few articles for the renowned German IT magazine “IT Administrator”, develop a new VMware exam, and then do my Take 3 at VMware.

What is “Take 3”? Great question!
The "Take 3" program allows eligible employees to take a break from their current role and explore a different role or project within the company for a duration of roughly three months. The intention behind the program is to foster a culture of continuous learning, enable career development, and encourage cross-functional collaboration within the company.

Read more on that in the dedicated blog here:
https://www.securefever.com/blog/off-topic-take-3

As promised in the previous blog, this post focuses on automating our VMC on AWS deployment using Terraform. Why? Because automation is king in the cloud! It promotes repeatability and scalability, and defining our infrastructure as code ensures consistency while reducing human error.

I am very much still learning Terraform, so let me know about suggestions to improve the code, structure, etc. Also, since this blog was in the making for quite a while, some screenshots might show outdated versions of providers.

Let's get started!

Why Terraform?

Terraform is a popular Infrastructure as Code (IaC) tool that allows users to define and provision infrastructure using a declarative configuration language (HCL, the HashiCorp Configuration Language). Its interoperability with both AWS and VMC makes it an ideal choice.

Prerequisites

- Familiarity with Terraform. If you're new, check out Terraform's Getting Started Guide (https://learn.hashicorp.com/terraform/getting-started/install.html).
- Terraform installed on your machine.
- Necessary permissions and credentials set up for both AWS and VMC.

Steps to Deploy VMC on AWS using Terraform

1. Setting Up Terraform

I am using macOS, so I used Homebrew to install Terraform on my machine:

brew install terraform

Once that is done, we can check the installed terraform version:

“terraform version” output

2. Terraform Files

For the deployment, I have set up multiple files.
Splitting Terraform deployments into multiple files makes them easier to troubleshoot, and I also wanted to start early with good habits.

Next to main.tf, I created “provider.tf”, “variables.tf”, “versions.tf”, “sddc.tf”, “vmc_vpc.tf” and “terraform.tfvars” in my project.
Let’s review the different files and what they are used for.

main.tf and module sources

the main.tf file

The main.tf file is the primary entry point for Terraform configurations. It contains the core resources and infrastructure definitions that Terraform will manage. In big projects, main.tf might be split into multiple files. For our use case the single main.tf file is fine.
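
To make the structure easier to follow, here is a minimal sketch of such a main.tf. It mirrors the backend path and module sources described below; the module input names are my own placeholders, not the exact original file:

# main.tf (illustrative sketch)

terraform {
  backend "local" {
    path = "../../phase1.tfstate"   # state stays on the local file system
  }
}

# AWS side: the connected VPC and its subnets
module "vmc_vpc" {
  source = "../vmc_vpc"

  # input names are placeholders - values come from variables.tf / terraform.tfvars
  vpc_cidr     = var.vpc_cidr
  subnet_cidrs = var.subnet_cidrs
}

# VMC side: the SDDC itself
module "sddc" {
  source = "../sddc"

  sddc_name          = var.sddc_name
  host_instance_type = var.host_instance_type
  sddc_type          = var.sddc_type
  # further inputs (region, management CIDR, subnet IDs, ...) omitted for brevity
}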

The first block defines where the Terraform state file will be stored. In our case, it’s on the “local backend”. That means the state will be kept in my local file system, under “../../phase1.tfstate”.

The second block contains a “module” definition. A module in Terraform is like a reusable function for infrastructure code. It groups related resources to better organize our code. Each module has a source showing where its content is found (source = "../vmc_vpc").
This module is used to configure the AWS VPC that we use for our VMC on AWS deployments.
It holds configuration for the region, VPC CIDR and subnet CIDRs, all based on variables found in the module source file:

vmc_vpc.tf - the file configuring the AWS VPC module

In this file we define the resources behind the module we referenced in main.tf.
It starts with some variable definitions. That way we can re-use the structure for different deployments by changing the contents of “variables.tf”.

Next, we define an “aws_vpc” and three “aws_subnet” resources.
These reference our connected VPC and the connected VPC subnets.
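
As a hedged sketch of what this module source can look like: resource names and values follow the plan output shown later, only the first subnet is written out, and the other two follow the same pattern:

# vmc_vpc.tf (illustrative sketch)

variable "vpc_cidr"     {}
variable "subnet_cidrs" { type = list(string) }

# look up the availability zones of the configured region
data "aws_availability_zones" "az" {
  state = "available"
}

# the connected VPC
resource "aws_vpc" "con_vpc" {
  cidr_block           = var.vpc_cidr
  enable_dns_hostnames = true
  enable_dns_support   = true

  tags = { Name = "rf_connected_vpc" }
}

# one subnet per AZ - con_vpc_subnet2/3 look the same, just with the next CIDR and AZ
resource "aws_subnet" "con_vpc_subnet1" {
  vpc_id                  = aws_vpc.con_vpc.id
  cidr_block              = var.subnet_cidrs[0]
  availability_zone       = data.aws_availability_zones.az.names[0]
  map_public_ip_on_launch = true

  tags = { Name = "rf_connected_vpc_subnet1" }
}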

The module source file pulls the relevant content from “variables.tf”:

image of variables.tf - this is where the module files get their variables content
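
As a stand-in for the screenshot, a variables.tf along these lines would fit the values used throughout this post. The variable names are my assumptions; the values mirror the plan output below:

# variables.tf (illustrative sketch - names assumed, values as used in this lab)

variable "aws_region" {
  default = "eu-west-2"      # London
}

variable "vpc_cidr" {
  default = "10.10.0.0/16"   # connected VPC
}

variable "subnet_cidrs" {
  type    = list(string)
  default = ["10.10.0.64/26", "10.10.0.128/26", "10.10.0.192/26"]
}

variable "sddc_name" {
  default = "rfrankemolle_tf_test"
}

variable "host_instance_type" {
  default = "I3_METAL"       # single-node i3 demo deployment
}

variable "sddc_type" {
  default = "1NODE"
}

variable "sddc_region" {
  default = "EU_WEST_2"      # VMC notation for the same region
}

variable "sddc_management_cidr" {
  default = "10.20.0.0/16"   # management subnet of the SDDC
}

# secrets (api_token, org_id, AWS keys) are declared here as well,
# but their values live in terraform.tfvars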

In the last block we create another module, but this time for the VMC SDDC configuration. The logic remains the same, though this time the variable contents are found in the source “../sddc”:

the module for the sddc creation

This module source also pulls content from variables.tf and configures a few more options. As this is a demo environment again, we are using the single node i3 deployment. That is why we have “host_instance_type” set to “I3_METAL” and the “sddc_type” to “1NODE”.
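
A condensed sketch of what this module source might contain; the attribute names and values follow the plan output further below, while the connected-accounts lookup argument is an assumption:

# sddc.tf (illustrative sketch)

variable "sddc_name"            {}
variable "sddc_region"          {}
variable "host_instance_type"   {}
variable "sddc_type"            {}
variable "sddc_management_cidr" {}
variable "customer_subnet_ids"  { type = list(string) }
variable "aws_account_number"   {}

# look up the AWS account that is linked to the VMC organization
data "vmc_connected_accounts" "my_accounts" {
  account_number = var.aws_account_number   # assumed lookup argument
}

resource "vmc_sddc" "vmc_sddc1" {
  sddc_name          = var.sddc_name             # "rfrankemolle_tf_test"
  region             = var.sddc_region           # "EU_WEST_2"
  provider_type      = "AWS"
  deployment_type    = "SingleAZ"
  num_host           = 1
  host_instance_type = var.host_instance_type    # "I3_METAL"
  sddc_type          = var.sddc_type             # "1NODE"
  vpc_cidr           = var.sddc_management_cidr  # management subnet, 10.20.0.0/16
  vxlan_subnet       = "10.100.100.0/24"

  account_link_sddc_config {
    connected_account_id = data.vmc_connected_accounts.my_accounts.id
    customer_subnet_ids  = var.customer_subnet_ids
  }
}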

provider.tf

In provider.tf we specify and configure the Terraform providers we use in our project (AWS, VMC). It centralizes provider configuration and keeps our Terraform project organized.

content of provider.tf
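
A rough sketch of such a provider.tf; the exact provider arguments depend on the provider versions in use, so treat the names below as assumptions to verify against the provider documentation:

# provider.tf (illustrative sketch)

provider "aws" {
  region     = var.aws_region
  access_key = var.aws_access_key
  secret_key = var.aws_secret_key
}

provider "vmc" {
  refresh_token = var.api_token   # CSP/VMC API refresh token
  org_id        = var.org_id
}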

versions.tf

The `versions.tf` file in a Terraform deployment is typically used to specify the versions of Terraform itself and the providers. This ensures consistency and compatibility across different machines – who doesn’t love a good old “works on my machine” error. Our “versions.tf” looks like this:
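
A minimal sketch of it, with the provider sources taken from the init output further below (the real file may differ slightly):

# versions.tf (illustrative sketch - no version constraints yet)

terraform {
  required_providers {
    vmc = {
      source = "terraform-providers/vmc"
    }
    nsxt = {
      source = "vmware/nsxt"
    }
    aws = {
      source = "hashicorp/aws"
    }
  }
}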

In the future this will be updated to constrain the allowed versions of the used providers.

Tokens, Authentication, Privileges

We have seen all the different configurations that we have Terraform manage. However, we did not specify credentials or similar anywhere. So how do we actually “log in” and create these resources? The last file we need to look at is “terraform.tfvars”. This file contains all the secrets and token information that Terraform uses to operate with the configured providers. I do not recommend storing them as plain text on your machine, but I have not yet explored secrets management any further.
So, for this time, this must be good enough. This is sample content of “terraform.tfvars” in my current project:

example output of terraform.tfvars - the file I currently store secrets in
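
As a rough illustration only, such a file typically contains entries like the following; every name and value here is a placeholder, not the real content:

# terraform.tfvars (placeholders only - never commit real secrets to version control)

api_token          = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"   # VMware Cloud Services API refresh token
org_id             = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"   # VMC organization ID
aws_access_key     = "AKIAXXXXXXXXXXXXXXXX"
aws_secret_key     = "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
aws_account_number = "123456789012"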

3. Initializing Terraform

Now that we have explored the different parts of our Terraform deployment, we can initialize Terraform by running “terraform init” in the folder with the config files:

rfrankemolle@rfrankemolFV7TR main % terraform init

Initializing the backend...
Initializing modules...

Initializing provider plugins...
- Reusing previous version of terraform-providers/vmc from the dependency lock file
- Reusing previous version of vmware/nsxt from the dependency lock file
- Reusing previous version of hashicorp/aws from the dependency lock file
- Using previously-installed vmware/nsxt v3.3.0
- Using previously-installed hashicorp/aws v4.58.0
- Using previously-installed terraform-providers/vmc v1.13.0

Terraform has been successfully initialized!

You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.

If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.

The command performs several tasks: it installs provider plugins, sets up modules, initialises the backend (the location where state files are stored) and prepares the working directory so that other commands like “plan” or “apply” can run.

4. Plan and Apply

After initializing, we run “terraform plan”. Terraform will output what it is going to deploy / change in the infrastructure:

rfrankemolle@rfrankemolFV7TR main % terraform plan
module.vmc_vpc.data.aws_availability_zones.az: Reading...
module.vmc_vpc.data.aws_availability_zones.az: Read complete after 0s [id=eu-west-2]
module.sddc.data.vmc_connected_accounts.my_accounts: Reading...
module.sddc.data.vmc_connected_accounts.my_accounts: Read complete after 0s [id=42686b6b-163d-3465-a953-09b3da081d31]

Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # module.sddc.vmc_sddc.vmc_sddc1 will be created
  + resource "vmc_sddc" "vmc_sddc1" {
      + account_link_state       = (known after apply)
      + availability_zones       = (known after apply)
      + cloud_password           = (known after apply)
      + cloud_username           = (known after apply)
      + cluster_id               = (known after apply)
      + cluster_info             = (known after apply)
      + created                  = (known after apply)
      + delay_account_link       = false
      + deployment_type          = "SingleAZ"
      + edrs_policy_type         = (known after apply)
      + enable_edrs              = (known after apply)
      + host_instance_type       = "I3_METAL"
      + id                       = (known after apply)
      + intranet_mtu_uplink      = 1500
      + max_hosts                = (known after apply)
      + min_hosts                = (known after apply)
      + nsxt_cloudadmin          = (known after apply)
      + nsxt_cloudadmin_password = (known after apply)
      + nsxt_cloudaudit          = (known after apply)
      + nsxt_cloudaudit_password = (known after apply)
      + nsxt_private_ip          = (known after apply)
      + nsxt_private_url         = (known after apply)
      + nsxt_reverse_proxy_url   = (known after apply)
      + nsxt_ui                  = (known after apply)
      + num_host                 = 1
      + org_id                   = (known after apply)
      + provider_type            = "AWS"
      + region                   = "EU_WEST_2"
      + sddc_access_state        = (known after apply)
      + sddc_name                = "rfrankemolle_tf_test"
      + sddc_size                = (known after apply)
      + sddc_state               = (known after apply)
      + sddc_type                = "1NODE"
      + size                     = "medium"
      + skip_creating_vxlan      = false
      + sso_domain               = "vmc.local"
      + updated                  = (known after apply)
      + updated_by_user_id       = (known after apply)
      + updated_by_user_name     = (known after apply)
      + user_id                  = (known after apply)
      + user_name                = (known after apply)
      + vc_url                   = (known after apply)
      + version                  = (known after apply)
      + vpc_cidr                 = "10.20.0.0/16"
      + vxlan_subnet             = "10.100.100.0/24"

      + account_link_sddc_config {
          + connected_account_id = "42686b6b-163d-3465-a953-09b3da081d31"
          + customer_subnet_ids  = (known after apply)
        }

      + timeouts {
          + create = "300m"
          + delete = "180m"
          + update = "300m"
        }
    }

  # module.vmc_vpc.aws_subnet.con_vpc_subnet1 will be created
  + resource "aws_subnet" "con_vpc_subnet1" {
      + arn                                            = (known after apply)
      + assign_ipv6_address_on_creation                = false
      + availability_zone                              = "eu-west-2a"
      + availability_zone_id                           = (known after apply)
      + cidr_block                                     = "10.10.0.64/26"
      + enable_dns64                                   = false
      + enable_resource_name_dns_a_record_on_launch    = false
      + enable_resource_name_dns_aaaa_record_on_launch = false
      + id                                             = (known after apply)
      + ipv6_cidr_block_association_id                 = (known after apply)
      + ipv6_native                                    = false
      + map_public_ip_on_launch                        = true
      + owner_id                                       = (known after apply)
      + private_dns_hostname_type_on_launch            = (known after apply)
      + tags                                           = {
          + "Name" = "rf_connected_vpc_subnet1"
        }
      + tags_all                                       = {
          + "Name" = "rf_connected_vpc_subnet1"
        }
      + vpc_id                                         = (known after apply)
    }

  # module.vmc_vpc.aws_subnet.con_vpc_subnet2 will be created
  + resource "aws_subnet" "con_vpc_subnet2" {
      + arn                                            = (known after apply)
      + assign_ipv6_address_on_creation                = false
      + availability_zone                              = "eu-west-2b"
      + availability_zone_id                           = (known after apply)
      + cidr_block                                     = "10.10.0.128/26"
      + enable_dns64                                   = false
      + enable_resource_name_dns_a_record_on_launch    = false
      + enable_resource_name_dns_aaaa_record_on_launch = false
      + id                                             = (known after apply)
      + ipv6_cidr_block_association_id                 = (known after apply)
      + ipv6_native                                    = false
      + map_public_ip_on_launch                        = true
      + owner_id                                       = (known after apply)
      + private_dns_hostname_type_on_launch            = (known after apply)
      + tags                                           = {
          + "Name" = "rf_connected_vpc_subnet2"
        }
      + tags_all                                       = {
          + "Name" = "rf_connected_vpc_subnet2"
        }
      + vpc_id                                         = (known after apply)
    }

  # module.vmc_vpc.aws_subnet.con_vpc_subnet3 will be created
  + resource "aws_subnet" "con_vpc_subnet3" {
      + arn                                            = (known after apply)
      + assign_ipv6_address_on_creation                = false
      + availability_zone                              = "eu-west-2c"
      + availability_zone_id                           = (known after apply)
      + cidr_block                                     = "10.10.0.192/26"
      + enable_dns64                                   = false
      + enable_resource_name_dns_a_record_on_launch    = false
      + enable_resource_name_dns_aaaa_record_on_launch = false
      + id                                             = (known after apply)
      + ipv6_cidr_block_association_id                 = (known after apply)
      + ipv6_native                                    = false
      + map_public_ip_on_launch                        = true
      + owner_id                                       = (known after apply)
      + private_dns_hostname_type_on_launch            = (known after apply)
      + tags                                           = {
          + "Name" = "rf_connected_vpc_subnet3"
        }
      + tags_all                                       = {
          + "Name" = "rf_connected_vpc_subnet3"
        }
      + vpc_id                                         = (known after apply)
    }

  # module.vmc_vpc.aws_vpc.con_vpc will be created
  + resource "aws_vpc" "con_vpc" {
      + arn                                  = (known after apply)
      + cidr_block                           = "10.10.0.0/16"
      + default_network_acl_id               = (known after apply)
      + default_route_table_id               = (known after apply)
      + default_security_group_id            = (known after apply)
      + dhcp_options_id                      = (known after apply)
      + enable_classiclink                   = (known after apply)
      + enable_classiclink_dns_support       = (known after apply)
      + enable_dns_hostnames                 = true
      + enable_dns_support                   = true
      + enable_network_address_usage_metrics = (known after apply)
      + id                                   = (known after apply)
      + instance_tenancy                     = "default"
      + ipv6_association_id                  = (known after apply)
      + ipv6_cidr_block                      = (known after apply)
      + ipv6_cidr_block_network_border_group = (known after apply)
      + main_route_table_id                  = (known after apply)
      + owner_id                             = (known after apply)
      + tags                                 = {
          + "Name" = "rf_connected_vpc"
        }
      + tags_all                             = {
          + "Name" = "rf_connected_vpc"
        }
    }

Plan: 5 to add, 0 to change, 0 to destroy.

If we are happy with the output, we run “terraform apply”. This will start the deployment by Terraform:

rfrankemolle@rfrankemolFV7TR main % terraform apply
module.vmc_vpc.data.aws_availability_zones.az: Reading...
module.vmc_vpc.data.aws_availability_zones.az: Read complete after 1s [id=eu-west-2]
module.sddc.data.vmc_connected_accounts.my_accounts: Reading...
module.sddc.data.vmc_connected_accounts.my_accounts: Read complete after 1s [id=42686b6b-163d-3465-a953-09b3da081d31]

Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # module.sddc.vmc_sddc.vmc_sddc1 will be created
  + resource "vmc_sddc" "vmc_sddc1" {
      + account_link_state       = (known after apply)
      + availability_zones       = (known after apply)
      + cloud_password           = (known after apply)
      + cloud_username           = (known after apply)
      + cluster_id               = (known after apply)
      + cluster_info             = (known after apply)
      + created                  = (known after apply)
      + delay_account_link       = false
      + deployment_type          = "SingleAZ"
      + edrs_policy_type         = (known after apply)
      + enable_edrs              = (known after apply)
      + host_instance_type       = "I3_METAL"
      + id                       = (known after apply)
      + intranet_mtu_uplink      = 1500
      + max_hosts                = (known after apply)
      + min_hosts                = (known after apply)
      + nsxt_cloudadmin          = (known after apply)
      + nsxt_cloudadmin_password = (known after apply)
      + nsxt_cloudaudit          = (known after apply)
      + nsxt_cloudaudit_password = (known after apply)
      + nsxt_private_ip          = (known after apply)
      + nsxt_private_url         = (known after apply)
      + nsxt_reverse_proxy_url   = (known after apply)
      + nsxt_ui                  = (known after apply)
      + num_host                 = 1
      + org_id                   = (known after apply)
      + provider_type            = "AWS"
      + region                   = "EU_WEST_2"
      + sddc_access_state        = (known after apply)
      + sddc_name                = "rfrankemolle_tf_test"
      + sddc_size                = (known after apply)
      + sddc_state               = (known after apply)
      + sddc_type                = "1NODE"
      + size                     = "medium"
      + skip_creating_vxlan      = false
      + sso_domain               = "vmc.local"
      + updated                  = (known after apply)
      + updated_by_user_id       = (known after apply)
      + updated_by_user_name     = (known after apply)
      + user_id                  = (known after apply)
      + user_name                = (known after apply)
      + vc_url                   = (known after apply)
      + version                  = (known after apply)
      + vpc_cidr                 = "10.20.0.0/16"
      + vxlan_subnet             = "10.100.100.0/24"

      + account_link_sddc_config {
          + connected_account_id = "42686b6b-163d-3465-a953-09b3da081d31"
          + customer_subnet_ids  = (known after apply)
        }

      + timeouts {
          + create = "300m"
          + delete = "180m"
          + update = "300m"
        }
    }

  # module.vmc_vpc.aws_subnet.con_vpc_subnet1 will be created
  + resource "aws_subnet" "con_vpc_subnet1" {
      + arn                                            = (known after apply)
      + assign_ipv6_address_on_creation                = false
      + availability_zone                              = "eu-west-2a"
      + availability_zone_id                           = (known after apply)
      + cidr_block                                     = "10.10.0.64/26"
      + enable_dns64                                   = false
      + enable_resource_name_dns_a_record_on_launch    = false
      + enable_resource_name_dns_aaaa_record_on_launch = false
      + id                                             = (known after apply)
      + ipv6_cidr_block_association_id                 = (known after apply)
      + ipv6_native                                    = false
      + map_public_ip_on_launch                        = true
      + owner_id                                       = (known after apply)
      + private_dns_hostname_type_on_launch            = (known after apply)
      + tags                                           = {
          + "Name" = "rf_connected_vpc_subnet1"
        }
      + tags_all                                       = {
          + "Name" = "rf_connected_vpc_subnet1"
        }
      + vpc_id                                         = (known after apply)
    }

  # module.vmc_vpc.aws_subnet.con_vpc_subnet2 will be created
  + resource "aws_subnet" "con_vpc_subnet2" {
      + arn                                            = (known after apply)
      + assign_ipv6_address_on_creation                = false
      + availability_zone                              = "eu-west-2b"
      + availability_zone_id                           = (known after apply)
      + cidr_block                                     = "10.10.0.128/26"
      + enable_dns64                                   = false
      + enable_resource_name_dns_a_record_on_launch    = false
      + enable_resource_name_dns_aaaa_record_on_launch = false
      + id                                             = (known after apply)
      + ipv6_cidr_block_association_id                 = (known after apply)
      + ipv6_native                                    = false
      + map_public_ip_on_launch                        = true
      + owner_id                                       = (known after apply)
      + private_dns_hostname_type_on_launch            = (known after apply)
      + tags                                           = {
          + "Name" = "rf_connected_vpc_subnet2"
        }
      + tags_all                                       = {
          + "Name" = "rf_connected_vpc_subnet2"
        }
      + vpc_id                                         = (known after apply)
    }

  # module.vmc_vpc.aws_subnet.con_vpc_subnet3 will be created
  + resource "aws_subnet" "con_vpc_subnet3" {
      + arn                                            = (known after apply)
      + assign_ipv6_address_on_creation                = false
      + availability_zone                              = "eu-west-2c"
      + availability_zone_id                           = (known after apply)
      + cidr_block                                     = "10.10.0.192/26"
      + enable_dns64                                   = false
      + enable_resource_name_dns_a_record_on_launch    = false
      + enable_resource_name_dns_aaaa_record_on_launch = false
      + id                                             = (known after apply)
      + ipv6_cidr_block_association_id                 = (known after apply)
      + ipv6_native                                    = false
      + map_public_ip_on_launch                        = true
      + owner_id                                       = (known after apply)
      + private_dns_hostname_type_on_launch            = (known after apply)
      + tags                                           = {
          + "Name" = "rf_connected_vpc_subnet3"
        }
      + tags_all                                       = {
          + "Name" = "rf_connected_vpc_subnet3"
        }
      + vpc_id                                         = (known after apply)
    }

  # module.vmc_vpc.aws_vpc.con_vpc will be created
  + resource "aws_vpc" "con_vpc" {
      + arn                                  = (known after apply)
      + cidr_block                           = "10.10.0.0/16"
      + default_network_acl_id               = (known after apply)
      + default_route_table_id               = (known after apply)
      + default_security_group_id            = (known after apply)
      + dhcp_options_id                      = (known after apply)
      + enable_classiclink                   = (known after apply)
      + enable_classiclink_dns_support       = (known after apply)
      + enable_dns_hostnames                 = true
      + enable_dns_support                   = true
      + enable_network_address_usage_metrics = (known after apply)
      + id                                   = (known after apply)
      + instance_tenancy                     = "default"
      + ipv6_association_id                  = (known after apply)
      + ipv6_cidr_block                      = (known after apply)
      + ipv6_cidr_block_network_border_group = (known after apply)
      + main_route_table_id                  = (known after apply)
      + owner_id                             = (known after apply)
      + tags                                 = {
          + "Name" = "rf_connected_vpc"
        }
      + tags_all                             = {
          + "Name" = "rf_connected_vpc"
        }
    }

Plan: 5 to add, 0 to change, 0 to destroy.

Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: yes

Accept to continue by typing “yes” and watch the magic happen!

5. Patience is a virtue

The process will take a while, as Terraform provisions resources in both AWS and VMware Cloud. Once completed, Terraform will provide an output with a summary.

module.sddc.vmc_sddc.vmc_sddc1: Still creating... [1h46m41s elapsed]
module.sddc.vmc_sddc.vmc_sddc1: Creation complete after 1h46m44s [id=d87b5f5b-6299-4627-839a-c0d83aa57167]

Apply complete! Resources: 5 added, 0 changed, 0 destroyed.

Benefits of Using Terraform for VMC on AWS Deployment

 Let’s quickly review the benefits of using Terraform (and IaC in general) for deployments in the cloud.

  1. Consistency: No manual errors. Your infrastructure is now version-controlled and can be deployed repeatedly with the same configurations.

  2. Scalability: Need to deploy multiple SDDCs? Just tweak your Terraform code; see the sketch after this list.


  3. Transparency: Your entire infrastructure is visible as code. This is great for team collaboration and auditing, and it makes configuration reviews easier.
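
To illustrate the scalability point: instead of copying the module block, the same module can be instantiated several times. A minimal sketch, assuming Terraform 0.13 or newer and placeholder names:

# deploy the same SDDC module once per environment
module "sddc" {
  source   = "../sddc"
  for_each = toset(["dev", "test"])

  sddc_name = "rf_sddc_${each.key}"
  # remaining inputs as in the single-SDDC example above
}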

Conclusion

Automating VMC on AWS deployment using Terraform does not necessarily speed up the process but ensures our infrastructure is consistent and scalable. As we evolve our cloud strategy, using tools like Terraform will be key to staying on top of things.

In my next blogs, I will likely delve deeper into different aspects of VMware Cloud. Stay tuned, and feel free to leave any comments or questions below! 

Links: 

https://www.securefever.com/vmc-aws-manual-deployment
https://learn.hashicorp.com/terraform/getting-started/install.html
https://registry.terraform.io/providers/vmware/vmc/latest/docs

Ralf Frankemölle

Off topic: Take 3

Since promising a blog series on VMC on AWS and posting the first blog, around 6 months have passed.
Part of the reason was my recent Take 3, which I will talk about in this post.

A Closer Look at VMware's "Take 3" Program

Apart from its top-tier virtualization products, VMware has a cool approach to employee growth. One standout initiative is the “Take 3” program. Let’s dive in!

What’s “Take 3” and what did I do?

In a nutshell, “Take 3” lets VMware employees switch things up. Eligible employees can take a break from their current role for up to three months and try something totally different within the company.
It’s a change of scene without leaving the building!

Almost 2 years ago I switched to the VMware Cloud Solution Architect role. This is very much a presales role and offers great variety in what I do all the time. However, after a while I wanted to see what Product Management looks like at VMware. Since I do not have Product Management experience and did not want to leave my role altogether, Take 3 came in very handy.

All it took was a quick informal application for a Take 3 opening and some coordination with managers and peers. Shortly after, I started my Take 3 as a Product Manager for VMware Cloud on Equinix Metal (VMC-E).

For those unfamiliar, VMC-E was announced at VMware Explore Barcelona last year:
https://blogs.vmware.com/cloud/2022/11/07/introducing-vmware-cloud-on-equinix-metal/
To make it short (and partially oversimplify): it brings all the fun of managed VMware SDDCs from platforms like AWS to the Equinix Metal platform.

Anyway, back to topic:
In this role, I was now working on different features for VMC-E.
I was not primarily in front of customers anymore, but rather working in the background.

Working through field feedback, deriving technical requirements from conversations with customers and peers, creating write-ups for PM and engineering teams - undoubtedly a switch from my daily tasks as a Solution Architect.

My time as a PM is coming to an end, but I can honestly say that it was a rewarding experience.
From prioritizing tasks within my own control, over negotiating feature timelines with other PM teams, to discussing possible solutions to challenges on a technical level, it was fun!

My top 3 reasons why Take 3 is awesome

The program is fantastic, and I can probably spend hours talking about it.
To save all of us some time, I broke it down to my top 3 reasons:

1. New Perspectives:
Part of the reason I wanted to see Product Management is the tendency to “complain” about the product team when you are constantly customer-facing in the field.
It is always that one feature, that one function that is missing, or not working as expected.
This sparked my interest in how hard creating / managing a product is.
Now, close to the end of my Take 3 I can honestly say: it is hard.

2. New / Improved Skills:
I could not rely on my technical experience as much as I had hoped. The single most important thing I found was clear communication – which sometimes involves explaining things to people much less technically versed. While this also happens in presales frequently, it was definitely different from what I am used to.
Apart from that, being inexperienced in the PM role, I tried to please everyone at once. Getting feedback from my (temporarily former) peers, I tried to work it all into our plan for the next weeks and months. Not only was this basically impossible to deliver upon, but it also revealed the next important skill to me: prioritization.
In a nutshell, prioritizing isn't just about ticking boxes—it's about making smart choices that align with our goals. So, next time you're swamped, take a step back, prioritize, and watch the magic happen!

3. Happy Employee(s):
Let’s face it, doing the “same old” all the time can get boring. Take 3 is like hitting a refresh button.
And now that things are coming to an end, I am excited to get back into my former role.

Wrapping it up

Technology moves fast, interests shift, and VMware acknowledges this. The Take 3 program is about staying fresh, learning more, and keeping things exciting. It’s a win for both parties and honestly another reason why VMware is a great place to work.

Security, Network, VMware - Joerg Roesch

VMware Explore EMEA Network & Security Sessions 2023

VMware Explore will take place in Barcelona from the 6th to the 9th of November 2023. In this blog post I provide recommendations for technical sessions related to Network & Security topics at the Explore event in Europe. I have excluded certification and Hands-on-Labs sessions from my list. I have focused on non-100-level sessions; only for new topics have I made a few exceptions.

Pricing

A full event pass for VMware Explore costs $1,575 for the EMEA event.

Full Event passes provide the following benefits:

  • Four days of sessions including the general session, solution keynotes, breakout sessions, roundtables and more, with content tailored to both the business and technical audience

  • Destinations, lounges and activities such as The Expo and VMware Hands-on Labs 

  • Focused programming for SpringOne, Partner* and TAM* audiences (These programs will have restricted access.)

  • Admittance to official VMware Explore evening events: Welcome Reception, Hall Crawl and The Party

  • Exclusive VMware Explore swag

  • Attendee meals (Tuesday through Thursday)

Your full event pass purchase also allows you to add on VMware Certified Professional (VCP) and VMware Certified Advanced Professional (VCAP) certification exam vouchers during registration at a 50 percent discount (exams must be taken onsite during VMware Explore Las Vegas).

VMware Explore Session Recommendations

Now I come to my session recommendations, which are based on my experience, on well-known speakers from previous years, and on topics that interest me from a Network and Security point of view. But first I have to say that every VMware Explore session is worth joining; customers, partners and VMware employees have put a lot of effort into preparing very good content. I am also very proud to deliver a breakout session myself for the first time, together with my customer BWI and Simer Singh from DPU Engineering [VIB1815BCN]. Thus you will find this session on my recommendation list as well :-)

For me, the VMware Explore sessions are the most important source of technical updates, innovation and training. All sessions can also be watched after VMware Explore. A few hints on the session IDs: in an ID like NSCB2088BCN, NSC stands for Network & Security and B for breakout session. BCN indicates that it is a Barcelona session. Sometimes you also see a letter D behind BCN, which means it is not an in-person session; D stands for distributed.

Network & Security Solution Keynote

Security Sessions

NSX Sessions - Infrastructure related

NSX Sessions - Operation and Monitoring related

DPU (SmartNICs)

NSX Sessions - Advanced Load Balancer (AVI) related

SD-WAN and SASE

NSX Customer Stories

Summary

There are a lot of interesting VMware Explore sessions covering many other topics like AI, multicloud, edge, containers, End User Computing, vSphere, etc.

Feel free to add comments below if you see other must-see sessions in the Network & Security area. I wish you a lot of fun at VMware Explore 2023 and look forward to seeing you in person!

Security, Network, VMware - Joerg Roesch

VMware Explore US Network & Security Sessions 2023

VMware Explore will take place in Las Vegas from the 21st to the 24th of August 2023. VMware Explore EMEA in Barcelona runs from the 6th to the 9th of November 2023. In this blog post I provide recommendations for technical sessions related to Network & Security topics at the US Explore event. I have excluded certification and Hands-on-Labs sessions from my list. I have focused on non-100-level sessions; only for new topics have I made a few exceptions.

Pricing

A full event pass for VMware Explore costs $2,295 for the US event.

Full Event passes provide the following benefits:

  • Four days of sessions including the general session, solution keynotes, breakout sessions, roundtables and more, with content tailored to both the business and technical audience

  • Destinations, lounges and activities such as The Expo and VMware Hands-on Labs 

  • Focused programming for SpringOne, Partner* and TAM* audiences (These programs will have restricted access.)

  • Admittance to official VMware Explore evening events: Welcome Reception, Hall Crawl and The Party

  • Exclusive VMware Explore swag

  • Attendee meals (Tuesday through Thursday)

Your full event pass purchase also allows you to add on VMware Certified Professional (VCP) and VMware Certified Advanced Professional (VCAP) certification exam vouchers during registration at a 50 percent discount (exams must be taken onsite during VMware Explore Las Vegas).

VMware Explore Session Recommendations

Now I come to my session recommendations, which are based on my experience, on well-known speakers from previous years, and on topics that interest me from a Network and Security point of view. But first I have to say that every VMware Explore session is worth joining; customers, partners and VMware employees have put a lot of effort into preparing very good content. For me, the VMware Explore sessions are the most important source of technical updates, innovation and training. All sessions can also be watched after VMware Explore. A few hints on the session IDs: in an ID like NSCB2088LV, NSC stands for Network & Security and B for breakout session. LV indicates that it is a Las Vegas session. Sometimes you also see a letter D behind LV, which means it is not an in-person session; D stands for distributed.

Network & Security Solution Keynote

Network & Security Multi Cloud Sessions

NSX Sessions - Container related

Security Sessions

NSX Sessions - Infrastructure related

NSX Sessions - Operation and Monitoring related

NSX Sessions - Advanced Load Balancer (AVI) related

SD-WAN and SASE

DPU (SmartNICs)

NSX Customer Stories

Summary

There are a lot of interesting VMware Explore sessions covering many other topics like AI, multicloud, edge, containers, End User Computing, vSphere, etc.

Feel free to add comments below if you see other must-see sessions in the Network & Security area. I wish you a lot of fun at VMware Explore 2023!

Ralf Frankemölle

VMware Cloud on AWS - Part 1 - Manual Deployment

This is my first blog on Securefever, and you can read my introduction here: 

https://www.securefever.com/about

In this blog post, we will cover the basics of a VMC on AWS deployment. 
The blog is meant to be accessible to everyone new to VMC. 

This blog series will cover requirements at both VMC and AWS levels. 
Any requirement listed in this blog will include examples created in our own AWS/VMware demo environment. 

My VMC environment is based on a 1-Node deployment. Keep in mind that this is not suitable for production workloads. 

The blog series will develop over the next couple of months. 

 Our initial deployment will be conducted manually. We will proceed to a DevOps approach in a follow-up exercise in which we will look at an automated deployment using Terraform. 

This blog series will also dive into additional services at a later stage, partially outside of the VMware/AWS ecosystem (external storage).  
For those interested, I will also follow up with a series on VMware Cloud on AWS as a disaster recovery solution.  
With that being said, let's jump right in! 

Requirements - AWS 

This series will include details and easy-to-follow instructions. However, I highly recommend familiarity with the basics and concepts of AWS Virtual Private Clouds (VPCs), Regions, and Availability Zones (AZs). 

Technical requirements:

  • AWS Account

  • A VPC (in the region where we want to deploy the SDDC), along with at least one VPC subnet in the AZ where we want to deploy.  

Please note that, as with all hyperscalers, you pay for what you use. That said, setting up an account, VPCs, and subnets is free of charge.
I encourage you to keep a cloud mindset: ensure that unused resources are powered off and that you delete all resources once your testing is complete, as these can generate monthly charges. 

I have reserved the following networks for the lab’s AWS resources. Your VPC networks do not need to reflect my selection; however, it may help you to follow along.  

  • AWS VPC: 10.10.0.0/16 

  • VPC Subnet AZ1: 10.10.0.64/26 

  • VPC Subnet AZ2: 10.10.0.128/26 

  • VPC Subnet AZ3: 10.10.0.192/26 

This is just an example network that was free in our lab setup, and I use this for many different tests/resources within AWS.  
The VPC network scope does not need to be a /16.  
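
If you prefer scripting the AWS side over clicking through the console, a minimal boto3 sketch along the following lines would create the same layout. The CIDRs match the plan above; the region, the AZ names and the Name tags are illustrative assumptions, not requirements:

    import boto3

    # Assumption: credentials are configured and eu-west-2 (London) is the target region.
    ec2 = boto3.client("ec2", region_name="eu-west-2")

    # VPC for the lab, using the 10.10.0.0/16 range reserved above.
    vpc_id = ec2.create_vpc(CidrBlock="10.10.0.0/16")["Vpc"]["VpcId"]
    ec2.create_tags(Resources=[vpc_id], Tags=[{"Key": "Name", "Value": "rf_connected_vpc"}])

    # One /26 subnet per availability zone, matching the ranges listed above.
    plan = {
        "eu-west-2a": "10.10.0.64/26",
        "eu-west-2b": "10.10.0.128/26",
        "eu-west-2c": "10.10.0.192/26",
    }
    for az, cidr in plan.items():
        subnet = ec2.create_subnet(VpcId=vpc_id, CidrBlock=cidr, AvailabilityZone=az)
        print(az, subnet["Subnet"]["SubnetId"])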

Why do I need an AWS account?  
The connected VPC has VPC subnets in one or more of the availability zones within the AWS region. 
By selecting one of these VPC subnets in a specific AZ, we determine the AZ in which the SDDC will be deployed.

Every production VMC on AWS deployment must be permitted to access a customer-owned VPC; this is referred to as the “Connected VPC”, and it allows connectivity with AWS native services. The Connected VPC enables customers to access services over a low-latency, high-bandwidth, AWS-managed connection. During the initial deployment, Elastic Network Interfaces are configured automatically for this low-latency connectivity between resources in the VMC SDDC and resources in AWS. This step is optional only for environments that will be deleted within 60 days; environments hosting workloads have to be connected to a shared VPC of your choice.  

 The ‘Connected VPC’ can be leveraged for use cases like hosting an application's database in RDS, adding load-balancing, accessing private S3 endpoints, or any of AWS' plethora of services. The ‘connected VPC’ also has the advantage of cost-free traffic to the VPC subnet in the AZ of the SDDC. This feature has the inherent benefit of lowering traffic charges, e.g. for backup costs in AWS. 

We will talk about additional use cases in a future blog post. 

Implementing VPC + VPC Subnets in the AWS Account 

To begin we will start by deploying the VPC in our preferred region (I am using London). Please note that VMC is not available in every region. Use the following link to find an eligible region near you: 
https://docs.vmware.com/en/VMware-Cloud-on-AWS/services/com.vmware.vmc-aws.getting-started/GUID-19FB6A08-B1DA-4A6F-88A3-50ED445CFFCF.html 

Every VPC must include at least one subnet. I will deploy a subnet in each of the AZs. Creating subnets in all AZs simplifies the deployment and testing process should we need to route traffic into other AZs later. 

I repeated the process twice to create "rf_connected_vpc_subnet2" and "rf_connected_vpc_subnet3". 

My naming convention combines the letters “RF” followed by a description.  
I encourage you to follow your organization's naming convention if you have one.  
If you are building your first cloud, please mind AWS’ naming convention guidelines: 
https://docs.aws.amazon.com/mediaconnect/latest/ug/tagging-restrictions.html  

Our efforts should deliver a VPC with three subnets.  
Now that the hard work is done, let’s proceed to the fun part, the SDDC deployment: 

Table of the newly created VPC subnets

 Requirements - VMware / VMC 

This section assumes you have a VMC organisation. 

VMC requires the following information / requirements: 

  • The AWS region we want to deploy in (the same region where we deployed the VPC) 

  • Deployment Type (Stretched Cluster or not) 

  • Host Type 

  • Number of Hosts 

  • AWS Account  (‘Shared VPC’ account) 

  • The AZ we want the SDDC to be deployed in (determined by selecting the VPC subnet in that AZ) 

  • Management Subnet (private IP range used to host the SDDC management components, like vCenter, NSX Manager, etc.)

AWS Region: 
For this exercise, I will deploy VMC in London.

Deployment Type: 
This lab will only contain a 1-Node SDDC. The "Single Host" deployment is a particular deployment only meant for POCs or short testing periods. 

(This lab will not demo a stretched cluster. The stretched cluster solution is meant for businesses that require an SLA of 99.99%. Please leave a comment or message me if you're interested in learning more about stretched clustering or VMC HA capabilities.) 

Host Type / Number of Hosts: 
1 x i3 instance. 
I am happy to announce that the expanded instance offerings now include i3en and i4i. Follow the link below for an overview of available instance types and their specs:
https://docs.vmware.com/en/VMware-Cloud-on-AWS/services/com.vmware.vmc-aws-operations/GUID-98FD3BA9-8A1B-4500-99FB-C40DF6B3DA95.html 

 Please work with VMware and / or a partner to determine which host type makes sense for your organisation and workload requirements. 

AWS Account: 
Let's get started by connecting an AWS account to your VMC organisation. 
In order to do this, you need to run an AWS CloudFormation template as an administrative user of the AWS account. 

As we have already created the VPC and subnets in a specific account, we want to make sure we link the AWS account that contains these resources.

AWS VPC + Subnet: 
After connecting to an AWS account we can select the VPC and VPC subnet. 
Remember that the VPC subnet determines the AZ in which your SDDC will be running. 

Management Subnet: 
The management subnet is where VMC will run the management components. 

For a production setup we recommend a /16 subnet from the private IP space defined in RFC 1918, but a minimum of /20 is required. Moreover, you cannot choose the CIDRs 10.0.0.0/15 or 172.31.0.0/16, as these are reserved. 

Note that the size of the management subnet influences the scalability of your SDDC and can not be changed after deployment. For an in-depth explanation of the management subnets, have a look at this blog post: https://blogs.vmware.com/cloud/2019/10/03/selecting-ip-subnets-sddc/ 

The deployment automation expects a /16, /20 or /23 (if non-production). Other ranges will not be accepted (/22 for example). 
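
To avoid a failed deployment attempt, a quick local check of a candidate management CIDR against these constraints can help. A minimal sketch using Python's standard ipaddress module, encoding only the rules mentioned above, could look like this:

    import ipaddress

    RESERVED = [ipaddress.ip_network("10.0.0.0/15"), ipaddress.ip_network("172.31.0.0/16")]

    def valid_management_cidr(cidr: str) -> bool:
        """Check a candidate SDDC management CIDR against the rules described above."""
        net = ipaddress.ip_network(cidr, strict=True)
        if not net.is_private:                 # must come from RFC 1918 private space
            return False
        if net.prefixlen not in (16, 20, 23):  # only /16, /20 or /23 are accepted
            return False
        return not any(net.overlaps(r) for r in RESERVED)  # reserved ranges are rejected

    print(valid_management_cidr("10.20.0.0/16"))  # True  - the CIDR used later in this post
    print(valid_management_cidr("10.1.0.0/16"))   # False - inside the reserved 10.0.0.0/15
    print(valid_management_cidr("10.30.0.0/22"))  # False - /22 is not an accepted size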

Putting it all together - deploying the SDDC 

Log in to the VMC console (vmc.vmware.com): 

  1. Click on “Inventory” 

  2. Click on “Create SDDC” 

Start VMC on AWS Deployment

Next, we configure the SDDC properties using the parameters we defined: 

  1. Name your SDDC 

  2. Select the region in which you created the ‘Connected VPC’ 

  3. Select the ‘Single Host’ or desired host count.  

  4. Select the host type 

  5. Please be advised that this configuration is not recommended for operations longer than 60 days 

  6. Click on ‘Next’ 

Provide required details for SDDC deployment

Please ensure you can access or have credentials to the AWS management console to connect the AWS ‘Connected VPC’ account. 

  1. Select "Connect to AWS now," "Connect to a new AWS account"  

  2. Press "OPEN AWS CONSOLE WITH CLOUDFORMATION TEMPLATE": 

Configure VMC account linking to AWS

This action will redirect to the AWS management console. Here we will execute the VMware-generated CloudFormation template: 

  1. Check the ‘I acknowledge that the AWS CloudFormation template might create IAM resources’ box  

  2. Press ‘Create Stack’ 

For more information on the permissions and actions, please visit the following link. There you will find VMware’s documentation of the actions and roles used by account linking, as well as the required permissions:  

https://docs.vmware.com/en/VMware-Cloud-on-AWS/services/com.vmware.vmc-aws-operations/GUID-DE8E80A3-5EED-474C-AECD-D30534926615.html  
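
If you would rather confirm the result from the command line than in the console, a small boto3 sketch along these lines can check whether the stack finished successfully. The stack name below is an assumption; use the name shown in your CloudFormation console, since VMware generates it:

    import boto3

    # Assumption: same AWS account/region in which the CloudFormation console was opened.
    cfn = boto3.client("cloudformation", region_name="eu-west-2")

    # Replace with the actual stack name created by the VMware-generated template.
    stack = cfn.describe_stacks(StackName="vmware-sddc-formation")["Stacks"][0]
    print(stack["StackName"], stack["StackStatus"])  # expect CREATE_COMPLETE when linking succeeded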

If your template is successful the results should look as follows: 

We may now return to the VMC console and continue with the SDDC deployment: 

The CloudFormation template allows VMC to connect to the ‘Connected VPC’ account in AWS. Please select the appropriate VPC and subnet from the dropdown: 

  1. Select the VPC 

  2. Select the subnet 

  3. Click ‘Next’ 

vmc network selection

It is a good idea to perform a review prior to making the acknowledgements. If the choices look correct, we can now provide the management CIDR. 

I will use 10.20.0.0/16 as the management subnet. 
(If you do not provide any input, the deployment assumes the default of 10.2.0.0/16): 

  1. Provide your Management Subnet CIDR  

  2. Click ‘Next’ 

Provide SDDC management subnet

We are almost there. The following screen advises us of the costs and the start of the charges. Please ensure that you are ready to launch, as charging starts as soon as we complete the “Deploy SDDC” process. 

  1. Acknowledge “Charges start once your SDDC has finished deploying. Accrued charges will be billed at the end of the month.” 

  2. Acknowledge “Pricing is per hour consumed for each host, from the time a host is launched until it is deleted.” 

finish SDDC deployment

Completion takes around 100 - 120 minutes.  
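
If you want to keep an eye on progress without watching the console, the deployment state can also be polled via the VMware Cloud on AWS REST API. The sketch below is a rough illustration only: the endpoints, parameter names and the sddc_state field reflect my understanding of the public API and should be verified against the VMC API reference, and the org ID and API token are placeholders:

    import time
    import requests

    ORG_ID = "your-org-id"            # placeholder
    REFRESH_TOKEN = "your-api-token"  # placeholder: CSP API token for your org

    # Exchange the CSP API token for an access token (verify the exact endpoint in the API docs).
    auth = requests.post(
        "https://console.cloud.vmware.com/csp/gateway/am/api/auth/api-tokens/authorize",
        data={"refresh_token": REFRESH_TOKEN},
    )
    headers = {"csp-auth-token": auth.json()["access_token"]}

    while True:
        sddcs = requests.get(
            f"https://vmc.vmware.com/vmc/api/orgs/{ORG_ID}/sddcs", headers=headers
        ).json()
        for sddc in sddcs:
            print(sddc["name"], sddc["sddc_state"])
        if all(s["sddc_state"] == "READY" for s in sddcs):
            break
        time.sleep(300)  # the deployment takes well over an hour, so poll infrequently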

With this, we conclude the first part of this blog series.
As you can see, VMC might sound complicated at first, but it can be implemented quickly with just a bit of preparation. 

In the next post, we will get our hands dirty with Terraform.

See you soon!

Read More
Joerg Roesch Joerg Roesch

How NSX and SmartNICs (DPUs) accelerates the ESXi Hypervisor!

With vSphere 8 and NSX 4, VMware has introduced support for SmartNICs. SmartNICs are usually referred to as DPUs (Data Processing Units). VMware has branded the DPU solution as the Distributed Services Engine (DSE) in vSphere 8. There are several different names for the same function; in this blog post, I will primarily use the name DPU (Data Processing Unit). DPU-based acceleration for NSX emerged from project "Monterey", an initiative that VMware started about two years ago and has been steadily developing further. 

The DPU architecture accelerates the networking and security function in the modern "Software Defined Data Center". NSX networking and security services are offloaded to DPUs to free up compute resources on the host. DPUs also provide enhanced visibility to show network communications. This helps with troubleshooting, mitigation against hacking attacks and compliance requirements. It enables VMware customers to run NSX services such as routing, switching, firewalling and monitoring directly on the DPU. This is particularly interesting for users who have significant demands in terms of high throughput, low latency and increased security standards. 

ESXi SmartNICs Architecture

VMware relies on an ARM processor on the DPU for the Distributed Services Engine (DSE) solution with vSphere 8 and NSX 4 (see Figure 1).

Figure 1: SmartNICs Architecture

 

There is a local flash memory on the card to roll out the ESXi software via a boot image. A stripped-down version of ESXi is installed on the DPU, which is optimised for I/O requirements, such as packet offloading, external management, etc. For network connectivity, there are two Ethernet ports with SFP (small form-factor pluggable) modules and one RJ-45 copper port for a management connection. Configuration management runs independently of the x86 server and has been greatly simplified for operation. The Programmable Accelerator maps the packet processing function in hardware and ensures that data traffic is offloaded to the DPU and accelerated. 
The High Speed Interconnect is the link between the Hardware Programmable Accelerator and the CPU, designed for low latency and high bandwidth.
Virtualised Device Functions (VDFs) enable network and storage devices to be provided as virtual devices. VDFs use Single Root I/O Virtualisation (SR-IOV) technology to connect virtual machines directly to physical devices, improving latency and throughput. They are able to combine the benefits of virtualisation with those of hardware acceleration. There is a one-to-one relationship between a VDF and a virtual machine (VM).

What are the advantages of DPU-based acceleration with NSX?

With SmartNICs, the NSX services (routing, switching, firewalling, monitoring) are offloaded from the hypervisor to the DPU (Data Processing Unit), freeing up computing resources on the ESXi host for the applications (see Figure 2). A modified, purpose-built ESXi image is installed on the DPU for this purpose. The new architecture runs the infrastructure services on the SmartNIC, providing the necessary separation between the application workloads running on the x86 computing platform and the infrastructure services. This is of enormous advantage for customers with high security and compliance requirements. Regulatory authorities such as the BSI (German Federal Office for Information Security) in particular often require separation of production and management traffic for certain environments. 

Figure 2: ESXi and SmartNICs

 

Advantages of DPU technology with NSX


1. Network performance optimization

DPUs are specifically designed for network services, overlay technology (such as VXLAN, GENEVE, etc.), load balancing and NAT (Network Address Translation) and therefore offer better performance than traditional general-purpose CPUs. SmartNICs use the VMDirectPath/UPTv2 (Uniform Passthrough) data path model, with the advantage of passing traffic directly from the NIC to the virtual machine without a virtual switch. 

2. Security

Security is one of the most important features of NSX. NSX Distributed Firewalling (micro-segmentation) uses a firewall engine on the ESXi hypervisor to roll out dedicated firewall rules directly in front of virtual machines or containers. The NSX Distributed Firewall (DFW) acts in software and is completely independent of IP address ranges; each individual workload gets its own dedicated firewall function, all managed from one management plane (NSX Manager). The DFW acts on layer 7, is stateful and does not require an agent on the ESXi hosts. The NSX Intrusion Detection and Prevention System (D-IDPS) uses technologies such as signature-based detection, behavioural analysis and machine learning to detect threats. NSX Distributed IDPS follows the same approach as the NSX Distributed Firewall, which means that the signatures are enforced directly in front of the dedicated workloads, also independently of IP ranges. 
SmartNICs completely offload the security functions from the NSX DFW and NSX D-IDPS to the DPU. Running network security services on a DPU provides improved performance and granular security and monitoring of network traffic. This is particularly interesting for the IDPS function, as signatures are used to directly verify the payload of a packet, thereby placing a load on the CPU.  

3. Visibility

The DPU-based NSX solution can monitor all traffic flows directly on the network card. This means you can map full network visibility and observation, including advanced network topology views, flow and packet level capture and analysis, and IPFIX support (see figure 3). No complex port mirroring is required for this, such as so-called network TAPs or SPANs (Switch Port Analyzer).


Figure 3: Visibility with SmartNICs

 

Furthermore, because the network services running on DPUs are isolated from the ESXi components and applications, a DPU-based architecture facilitates the delineation of operational responsibilities between DevOps teams and VI administrators, who can focus on and manage host-level workloads, and NetSecOps teams, who can manage the network infrastructure and services on the SmartNIC.

4. Cost reduction 

As mentioned earlier, by offloading networking and security services to the DPUs, more host resources are freed up for workloads. As a result, more workload capacity can be provided on fewer servers without compromising the monitoring, manageability and security features that vSphere and NSX provide. 
You also benefit from operational savings by consolidating management across different workload types such as Kubernetes, containers and virtual machines, and simplifying the implementation of micro-segmentation, IDPS features and network monitoring without costly port mirroring. 

5. Sustainability and energy savings

By increasing efficiency with SmartNICs, computing tasks are offloaded from the main processors, thereby reducing energy consumption and associated CO2 emissions. 
As DPUs concentrate performance and efficiency onto fewer servers, the number of hardware components required is reduced. This increases the lifetime of the devices and reduces the amount of waste, thus protecting the environment. 

Which DPU functions are currently supported by VMware?

Currently, the network card manufacturers NVIDIA and Pensando (AMD) support the "Distributed Service Engine" DPU function of VMware with vSphere 8 and NSX 4. The DPU cards are supplied as a complete system by the server manufacturers Dell and HPE. Lenovo will also provide servers with DPUs in the future.

NSX version 4 supports the following DPU functions (Source: https://docs.vmware.com/en/VMware-NSX/4.0.1.1/rn/vmware-nsx-4011-release-notes/index.html and https://docs.vmware.com/en/VMware-NSX/4.1.0/rn/vmware-nsx-410-release-notes/index.html ):

  • Networking:

    • Overlay and VLAN based segments

    • Distributed IPv4 and IPv6 routing

    • NIC teaming across the SmartNIC / DPU ports

  • Security

    • Distributed Firewall

    • Distributed IDS/IPS (Tech Preview)

  • Visibility and Operations

    • Traceflow

    • IPFIX

    • Packet Capture 

    • Port Mirroring

    • Statistics

  • Supported Vendors

    • NVIDIA Bluefield-2 (25Gb NIC models)

    • AMD / Pensando (25Gb and 100Gb NIC models)

  • Scale

    • Single DPU is supported per host consumed by single VDS

  • VMDirectPath (previously named UPTv2 – Uniform Passthrough): DPU-based acceleration for NSX supports bypassing the host-level ESXi hypervisor and allowing direct access to the DPU, which lets customers achieve a high level of performance without sacrificing the features they leverage from vSphere and NSX.

  • SmartNIC support for Edge VM: DPDK vmxnet3 driver updates to support DPU-based (SmartNIC) pNICs for datapath interfaces on Edge VM form factor. Traffic through the Edge VM will benefit from this hardware acceleration. It can only be enabled on all datapath interfaces at the same time.

Summary:
With SmartNICs, NSX 4 and vSphere 8, VMware improves performance at the hypervisor level while taking into account the current network and security requirements of modern applications. Especially in times of increased security requirements due to ransomware and other potential attacks, this is an enormous advantage, as is the physical isolation of the workload and infrastructure domains. The purchase of new dedicated hardware in the form of additional DPU network cards with their own processors must be taken into account and considered in future architecture planning. These investments are offset by savings in energy costs and a reduction in the total number of servers.

Read More
Joerg Roesch Joerg Roesch

VMware Explore US 2022 - Network & Security News?

VMware Explore took place in San Francisco from 29th of August until 1st of September 2022. VMware Explore is the rebranded event formerly known as VMworld. Raghu Raghuram mentioned in the keynote session that VMworld has been renamed because VMware wants to be multi-cloud centric and Explore should be a multi-cloud community event.

VMware announced a lot of news on the event, here are the most important ones from a high level view:

  • Announcements of vSphere 8, vSAN 8, NSX 4.0, TANZU Application Platform 1.3, Edge Compute Stack 2.0

  • Cloud SMART (with the areas App Platform, Cloud Management, Cloud and Edge Infrastructure, Security Networking, Anywhere Workspace)

  • Cloud Universal commercial model for Cloud Smart

  • VMware Aria - Centralized views and controls to manage entire multi-cloud environments.

  • DPU (SmartNICs) Acceleration

  • Project Northstar - Provides centralized Network & Security management across Multi-clouds (on-prem, hybrid, public cloud) as SaaS Service

I want to focus in this blog post on the Network & Security announcements. As described above, the most important ones are Project Northstar and DPU-based acceleration for NSX.

Project Northstar

The network and security management of a multi-cloud environment can be complex, costly and time-consuming. VMware has announced Project Northstar in tech preview. Northstar is a SaaS (Software-as-a-Service) offering of the NSX platform which provides Centralized Policy Management (Policy-aaS), Security Planning and Visibility (NSX Intelligence-aaS), Network Detection and Response (NDR-aaS), Advanced Load Balancing (NSX ALB-aaS) and Workload Mobility (HCX-aaS).

Picture 1: Project Northstar

 

DPU-based Acceleration for VMware NSX

Modern applications are driving increased I/O traffic, volume and complexity. Security threats are evolving, and infrastructure is getting more distributed with containers, VMs, CPUs and GPUs. IT departments therefore face major challenges with performance, scaling and complexity. DPU-based acceleration (also known as SmartNICs) for NSX addresses this topic (see picture 2).

But what is a DPU or SmartNic?

A SmartNIC is a network interface card with a built-in processor, also known as a DPU (Data Processing Unit), that can be managed separately from the host CPU. This means that networking, security, and storage services can run directly on the NIC instead of relying on the host CPU. The NSX functions routing, switching, firewalling and monitoring run completely on the DPU/SmartNIC. There are several advantages and use cases for this solution:

  1. Free up computing resources on the host to focus on applications

  2. Enhanced network performance for network services, security and visibility

  3. Robust physical isolation of the workload and infrastructure domains

  4. Comprehensive manageability and observability for all traffic across heterogeneous workloads (no TAP or SPAN ports required)

Picture 2: DPU-based Acceleration for NSX

 

A new data-path model named UPTv2 (Uniform Passthrough) has been implemented. This solution combines the advantages of the SR-IOV (Single Root I/O Virtualization) and EDP (Enhanced Data Path) data-path models. For more details, have a look at the following video:

Deep Dive

For SmartNIC or DPU implementations, no changes to the NSX key concepts are required. Security policies are enforced at the vNIC level, and firewall rules, groups, services, etc. are still managed as before. The NSX workflows and the API are also unchanged.

There is also an excellent demo video available for DPU-based Acceleration for VMware NSX:

DEMO

Project Watch

Project Watch is a new approach to multi-cloud networking and security with enhanced app-to-app policy controls. This solution extends existing security systems for continuous risk and compliance assessment. Project Watch is available in tech preview and addresses compliance and security challenges by continuously observing, assessing, and dynamically mitigating risk and compliance problems in multi-cloud environments.

Project Trinidad

This project covers the extension of VMware API security and analytics by deploying sensors on Kubernetes clusters. Machine learning (ML) with business logic inference is used to detect anomalous east-west traffic between microservices.

Expansion of Network Detection to the VMware Carbon Black Cloud Endpoint

VMware is strengthening its lateral security capabilities by embedding network detection and visibility into Carbon Black Cloud's endpoint protection platform, which is now available to select customers in early access. This extended detection and response (XDR) telemetry adds network detection and visibility to endpoints with no changes to infrastructure or endpoints, providing customers with extended visibility into their environment across endpoints and networks leaving attackers nowhere to hide.

Ransomware Recovery in VMC

VMware also announced at VMware Explore a ransomware recovery-as-a-service solution for VMware Cloud on AWS (VMC). This is a new approach to safe recovery that prevents reinfection of IT and line-of-business production workloads through recovery into an on-demand environment.

Summary

There were also a lot of other announcements at VMware Explore, like new NSX Advanced Load Balancer bot management capabilities, new SASE (Secure Access Service Edge) web proxy-based connectivity to VMware Cloud Web Security, and the NSX Gateway Firewall now offering a new stateful active-active edge scale-out capability that significantly increases network throughput for stateful services.

Feel free to add comments if you have seen other important announcements or technical innovations at VMware Explore US 2022. Hopefully see you at VMware Explore Europe in Barcelona!

Read More
Security, Network, VMware Joerg Roesch Security, Network, VMware Joerg Roesch

VMware Explore (VMworld) Network & Security Sessions 2022

After two remote events (VMworld 2020 and 2021), the VMware event is finally back onsite. There is also a rebrand: VMworld has been renamed to VMware Explore. The event will take place in San Francisco (29th of August until 1st of September 2022), Barcelona (7th of November until 10th of November 2022), Sao Paulo (19th of October until 20th of October), Singapore (15th of November until 16th of November 2022), Tokyo (15th of November until 16th of November 2022) and Shanghai (17th of November until 18th of November 2022). In this blog post I provide recommendations for some deep-dive sessions related to Network & Security. I have excluded certification and Hands-on-Labs sessions from my list. I have focused on non-100-level sessions; only in the case of new topics have I made some exceptions.

Pricing

A full event pass for VMware Explore will be $2,195 for the US event and €1,475 for the Europe event. Full event passes provide the following benefits:

  • Access to The Expo

  • Participation in hands-on labs

  • Entry to the welcome reception and hall crawl

  • Entry to the VMware Explore 2022 Party

  • Discounts on training and certification

  • Meals as provided by VMware Explore

  • VMware Explore-branded promotional item

  • Networking lounges

  • Meeting spaces available on demand

  • Attendance at general session and breakout sessions (Note: Some sessions require valid Partner status)

  • Please note: Discounts are not applicable (ex: VMUG)

VMworld Session Recommendations

Now I come to my session recommendations, which are based on my experience, on speakers I have come to know as very good over the last years, and on topics that are interesting from a Network and Security point of view. But first I have to say that every VMware Explore session is worth joining; customers, partners and VMware employees have put a lot of effort into preparing very good content. For me, the VMware Explore sessions are the most important source of technical updates, innovation and training. All sessions can also be watched after VMware Explore. I also have to mention that I still can't get used to the new name VMware Explore. I loved the brand VMworld :-( The recommendations are based on the US content catalog, but a lot of sessions will also be available at the other locations. The letters in an ID like NET2233US stand for NET = Network or SEC = Security. US indicates that it is a US session. Sometimes you will also see the letter D after US, which means it is not an in-person session; D stands for distributed.

Network & Security Solution Key Note

Network & Security Multi Cloud Sessions

NSX Sessions - Container related

Security Sessions

NSX Sessions - Infrastructure related

NSX Sessions - Operation and Monitoring related

NSX Sessions - Advanced Load Balancer (AVI) related

SD-WAN and SASE

SMARTNICS - Project Monterey

Summary

There are a lot of interesting VMware Explore sessions, also covering many other topics like Cloud, Edge, Containers, End User Computing, vSphere, Blockchain, etc.

Feel free to add comments below if you see other must-see sessions within the Network & Security area. I wish you a lot of fun at VMware Explore 2022!

Read More
Joerg Roesch Joerg Roesch

Edge Network Intelligence - An AIOps solution to monitor end user and IoT performance

VMware Edge Network Intelligence (ENI) is a vendor-agnostic artificial intelligence (AI) and machine learning (ML) solution that ensures end-user and IoT (Internet of Things) client performance, security and self-healing across wireless and wired LAN, SD-WAN and Secure Access Service Edge (SASE). The product Edge Network Intelligence (ENI) came to VMware with the Nyansa acquisition in January 2020. The product is available with SD-WAN and SASE (Secure Access Service Edge) or as a standalone deployment. It is an end-to-end monitoring and troubleshooting solution.

What makes ENI unique?

Most companies today have several monitoring and troubleshooting tools. A lot of management tools are vendor-specific, inflexible and not user-focused. Silos between compute, network, storage and security teams make it more difficult, and finger-pointing in case of performance issues is not an exception. Often it is not possible to install a management agent on IoT devices due to regulatory restrictions. All this makes troubleshooting time-consuming, costly and reactive.

Edge Network Intelligence collects data from the end device, application and network which includes client, LAN, WAN, Wireless, Firewalls and applications (see picture 1). ENI provides following features:

  1. Analyzes every wired and wireless user transaction

  2. Proactive monitoring from user and device incidents

  3. Benchmarking

  4. Displays user performance before and after a network change

  5. Root cause analytics for user and network problems

  6. Site-by-site user experience comparisons

  7. IoT device monitoring

Picture 1: Edge Network Intelligence (ENI) Overview

How is ENI designed?

Initially, ENI measures user and IoT device experience and behavior. The system creates a baseline from these inputs and detects anomalies when outliers occur. As a next step, proactive recommendations and predicted benefits are created based on machine learning that correlates across the application stack. Finally, remediation is realized with self-healing networking and policy-violation feedback. With this design, the system provides deep insight and analytics functions for entire customer environments as an end-to-end monitoring solution.

ENI is quite simple to understand and use. The left sidebar shows the areas Dashboards, Incidents, Analysis, Inventory, Report Management and dedicated account settings.

Dashboards

By default, the dashboards “Summary” (see picture 2) and “Global” (see picture 3) are available. The “Summary” dashboard shows information about the top 5 application issues and the top 5 applications by traffic utilization. It is always possible to zoom in to get more details. Advisories, problematic clients and Wi-Fi health by group round out the “Summary” dashboard. The dashboards show live data.

Picture 2: Dashboard - Summary

The “Global” dashboard (see picture 3) provides a nice graphical overview with an incidents and performance sidebar. From this starting point it is possible to click through to dedicated sites or to jump into specific problems, e.g. on the site “Newcastle” 44 % of clients are Wi-Fi affected. The next step could be to check the “Newcastle” dashboard to see whether there is an outage of a wireless controller or any other problem.

Picture 3: Dashboard - Global

Service Desk Feature

Another good way to fix a user problem is the “Service Desk” option. Troubleshooting can be started with the device username, IP address, hostname or MAC address. After the time range is specified, you can select which problem has been reported, like Network is Slow, Internet Connection Dropped, Can't Connect to Wi-Fi, Wi-Fi Disconnecting, Poor/Weak Wi-Fi, Poor Video Conference, Application Trouble or other items. Afterwards the system indicates whether the issue is related to a known problem.

Picture 4: Service Desk - Step 1

Picture 5: Service Desk - Step 2

Incidents

In the Incidents view (see picture 6), severity is structured into P1 (Critical), P2 (High), P3 (Medium) and P4 (Low). There is an option to filter by priority or by type. Some problems are visible under more than one type, e.g. “Client has poor Wi-Fi performance” can be shown under type Application and type Wi-Fi. The time range can be changed in the top right corner.

Picture 6: Incidents view

After a specific problem is chosen, more details become visible (see picture 7). On the timeline, the baseline is compared to the issue. Incident summary, potential root causes, next steps, affected clients, symptoms, top client properties and client properties provide a lot of valuable input to fix the outage. The graphs under client properties show more specific parameters, like access point groups, gateway, OS, model, DHCP server, access point and VLAN ID.

Picture 7: Troubleshooting from a dedicated problem

Analysis

Analysis can be done from a Network History, Benchmarks or Health & Remediations point of view.

The Network History dashboard shows a timeline of how many users are affected. This can be sorted by different metrics, like client web performance, clients not connecting to DNS, etc. The purple line indicates a change that has been made in the environment. The time period can also be changed in the top right corner.

Picture 8: Network History view

Special dashboards for industry and internal benchmarking are also available in the system.

Health & Remediations

The Health & Remediations view is very useful to check reported user problems like “WebEx is slow”. A metric can be choosen to investigate which clients, access points and custom groups are affected.

Picture 9: Health and Remediations view

Inventory
The Inventory area is also a very good feature. Especially for IoT devices it is very useful to get information about device type, OS, IP address, MAC address, hostname, category, application, access points, switches, VLANs, servers, etc. For medical devices, as an example, device information like infusion pump or computed tomography (CT) scanner, operating system, protocols used, IP addresses, MAC addresses, etc. is shown.

Reports can also be created with different metrics, and they can be generated directly or scheduled.

Summary

Edge Network Intelligence (ENI) is a nice and easy way to monitor and troubleshoot end-user and IoT client problems. The main advantage is that the system is designed to troubleshoot from the user's point of view, e.g. by searching the system for “WebEx is slow”. You can get the solution with VMware SD-WAN, with SASE or standalone. If you want to check it out for free, go to https://sase.vmware.com/products/edge-network-intelligence/edge-network-intelligence-demo.

Read More
Security, Network, VMware Joerg Roesch Security, Network, VMware Joerg Roesch

VMworld Network & Security Sessions 2021

VMworld 2021 will take place remotely again this year, from 5th of October 2021 until 7th of October 2021. In this blog post I provide recommendations for some deep-dive sessions related to Network & Security. I have excluded general keynotes, certification and Hands-on-Labs sessions from my list. I have focused on non-100-level sessions; only in the case of new topics have I made some exceptions.

Pricing

The big advantage of a remote event is that everyone can join without any travelling; the big disadvantage is, of course, missing the socializing over some drinks :-) Everyone can register for the general pass free of charge. There is also the possibility to order a Tech+ Pass, which includes additional benefits like more sessions, discussions with VMware engineers, 1:1 expert sessions, certification discounts, etc. The Tech+ Pass costs $299, and a lot of good sessions are only available with this pass. From my point of view it is worth ordering this pass.

VMworld Session Recommendations

Now I come to my session recommendations, which are based on my experience from the last years and on topics that are interesting from a Network and Security point of view. But first I have to say that every VMworld session is worth joining, and especially with COVID-19 there were a lot of submissions this year from customers, partners and VMware employees. For me, the VMworld sessions are the most important source of technical updates, innovation and training. All sessions can also be watched after VMworld.

NSX Sessions - Infrastructure related

  • Enhanced Data Center Network Design with NSX and VMware Cloud Foundation [NET1789]

  • NSX-T Design, Performance and Sizing for Stateful Services [NET1212]

  • Deep Dive on Logical Routing in NSX-T [NET1443]

  • Deep Dive: Routing and Automation Within NSX-T [NET1472]

  • High Availability and Disaster Recovery Powered by NSX Federation [NET1749]

  • Design NSX-T Data Center Over Cisco ACI Site and Multisite [NET1480]

  • NSX-T Edge Design and ACI Multi-Site [NET1571]

  • Getting Started with NSX Infrastructure as Code [NET2272]

  • NSX-T and Infrastructure as Code [CODE2741]

  • 7 Key Steps to Successfully Upgrade an NSX-T Environment [NET1915]

  • Service Provider and Telco Software-Defined Networking with VMware NSX [NET1952]

  • Self-Service Will Transform Modern Networks [NET2689]

NSX Sessions - Operation and Monitoring related

  • NSX-T Common Support Issues and How to Avoid Them [NET1829]

  • Automated Problem Resolution in Modern Networks [NET2160]

  • Simplify Network Consumption and Automation for Day 1 and Day 2 Operations [NET2185]

  • Network Operations: Intelligence and Automation from Day 0 to Day 2 [NET2697]

  • A Guide to Application Migration Nirvana [MCL1264]

NSX Sessions - NSX V2T Migration related

  • NSX Data Center for vSphere to NSX-T Data Center – Migration Approaches [NET1211]

  • NSX Data Center for vSphere to NSX-T: Simon Fraser University Case Study [NET1244]

NSX Sessions - Advanced Load Balancer (AVI) related

  • Architecting Datacenter Using NSX and AVI [VMTN2861]

  • Best Practices on Load Balancer Migrations from F5 to VMware [NET2420]

  • Get the Most Out of VMware NSX Data Center with Advanced Load Balancing [NET1791]

  • Ask Me Anything on Automation for Load Balancing [NET2220]

  • Ask Me Anything on Load Balancing for VMware Cloud Foundation and NSX [NET2186]

NSX Sessions - Container related

  • NSX-T Container Networking [NET1282]

  • NSX-T Reference Designs for vSphere with TANZU [NET1426]

  • Better Secure Your Modern Applications with No Compromise on Speed and Agility [NET1730]

  • Bridge the Lab-to-Prod Gap for Kubernetes with Modern App Connectivity [APP2285]

  • Container Networking Runs Anywhere Kubernetes Runs – From On-Prem to Cloud [NET2209]

  • Kubernetes Security Posture Management [SEC2602]

NSX Security Sessions

  • Never Trust: Building Zero Trust Networks [NET2698]

  • Simplify Security Complexity [SEC2732]

  • Data Center Segmentation and Micro-Segmentation with NSX Firewall [SEC1580]

  • Macro- to Micro-Segmentation: Clearing the Path to Zero Trust [SEC1302]

  • Creating Virtual Security Zones with NSX Firewall [SEC1790]

  • NSX Advanced Threat Prevention: Deep Dive [NET1376]

  • NSX IDS/IPS – Design Studio [UX2555]

  • NSX TLS Inspection – Design Studio [UX2578]

  • End to End Network Security Architecture with VMware NSX [SEC1583]

  • Demystifying Distributed Security [SEC1054]

  • Visualize Your Security Policy in Action with NSX Intelligence [SEC2393]

  • Network Detection and Response from NSX Intelligence [SEC1882]

  • Addressing Malware and Advanced Threats in the Network [SEC2027]

  • A Tale of Two Beacons: Detecting Implants at the Host and Network Levels [SEC2587]

  • Mapping NSX Firewall Controls to MITRE ATT&CK Framework [SEC2008]

Network & Security and Cloud

  • Innovations in Securing Public Cloud [SEC2709]

  • Multiple Clouds, Consistent Networking [NET2389]

  • Radically Simplifying Consumption of Networking and Security [NET2388]

  • Innovations in Better Securing Multi-Cloud Environments [SEC2608]

  • Better Secure Network Connectivity Between Public and Private Clouds: Panel [NET2687]

  • Security for Public Cloud Workloads with NSX Firewall [SEC2283]

  • Azure VMware Solution: Networking, Security in a Hybrid Cloud Environment [MCL2404]

  • Cloud Workload Security and Protection on VMware Cloud [SEC1296]

  • Automation HCX Migrations [CODE2806]

Intrinsic Security with VMware Carbon Black

  • America`s Insurgency: The Cyber Escalation [SEC2670]

  • Anatomy of the VMware SOC [SEC1048]

  • Building your Modern SOC Toolset [SEC2642]

  • Better Secure Remote Workers with VMware Carbon Black Cloud [SEC2666]

  • Cloud Workload Protection, Simplified [SEC2601]

  • Ask the VMware Threat Analysis Unit: Common Mistakes Seen During IR [SEC2676]

  • Automating Ransomware Remediation with the VMware Carbon Black Cloud SDK [CODE2787]

  • How to Prevent Ransomware Attacks [SEC2659]

  • How to Evolve Your SOC with the MITRE ATT&CK Framework [SEC2664]

  • DDoS Deep Dive [SEC3041S]

SD-WAN and SASE

  • VMware SASE: What`s New and What`s Next [EDG1647]

  • Multi-Cloud Networking with VMware SD-WAN [NET1753]

  • Consuming Cloud Provider SASE Services [EDG1304]

  • Cloud First: Secure SD-WAN & SASE – Complete & Secure Onramp to Multi-Cloud [EDG2813S]

  • Deliver Reliability, Better Security and Scalability with Edge Computing and SASE [EDG2417]

  • VMware SD-WAN 101 and Federal Use Cases [EDG1699]

  • VMware SD-WAN: Real Live from the Field [NET1109]

  • Help Protect Anywhere Workforce with VMware Cloud Web Security [EDG1168]

  • Containerized Applications at the Edge Using VMware Tanzu and SASE [EDG2325]

  • How Healthcare is More Securely Delivering Better Patient Experiences [EDG1965]

  • Extend SD-WAN Visibility and Analytics with vRealize Network Insight [EDG1345]

  • AIOps for SASE: Self-Healing Networks with VMware Edge Network Intelligence [NET1172]

  • AIOps for Client Zoom Performance with VMware Edge Network Intelligence [NET1169]

SMARTNICS - Project Monterey

  • Project Monterey: Present, Future and Beyond [MCL1401]

  • 10 Things You Need to Know About Project Monterey [MCL1833]

  • Partner Roundtable Discussion: Project Monterey – Redefining Data Center Solutions [MCL2379]

  • Accelerate Infrastructure Functions and Improve Data Center Utilization [NET2874S]

Summary

There are a lot of interesting VMworld sessions, also covering many other topics like Cloud, Containers, End User Computing, vSphere, etc.

Feel free to add comments below if you see other must-see sessions within the Network & Security area. I wish you a lot of fun at VMworld 2021 and hopefully see you onsite again in 2022!

Read More
Joerg Roesch Joerg Roesch

Securing Medical Devices with VMware SD-WAN and NSX (German Version)

In recent years we have seen a large number of cyberattacks such as ransomware, malware, trojans, etc. in the healthcare sector. Most recently there was a ransomware attack on the Irish health service (https://www.heise.de/news/Ransomware-legt-IT-des-irischen-Gesundheitswesens-lahm-6046309.html). The hackers' main targets are primarily medical devices such as infusion pumps, computed tomography (CT) scanners, magnetic resonance imaging (MRI) scanners, PACS systems (Picture Archiving and Communication System), etc.

Why are medical devices a popular target for hackers?

There are several reasons for this. Medical devices often have security vulnerabilities, and their patches are not up to date. One reason is the variety of hardware as well as the special operating and application systems involved. It is not enough to think about how to secure a specific Windows or Linux OS version; hospitals and clinics have to manage a lot of different software and end devices. Medical devices use special protocols such as DICOM (Digital Imaging and Communications in Medicine), and from a regulatory perspective (e.g. ISO standards, BSI certifications, etc.) it is often not permitted to make changes to these devices.

The high cybersecurity risk is another reason why medical devices are a popular target for hackers. The impact of an attack in the healthcare sector can be very critical. The result can be an outage of medical devices, danger to the patient, loss of personal patient data or interruptions to daily hospital operations. The Covid pandemic makes this situation even more real and dangerous.

What are the original use cases for SD-WAN and NSX?

Before I come to securing the medical devices, I want to describe the general use cases of VMware SD-WAN and NSX.

VMware SD-WAN optimizes WAN traffic with dynamic path selection. The solution is transport-independent (broadband, LTE, MPLS) and simple to configure and manage (see Figure 1). To ensure a secure connection between sites and to public cloud services (such as Office 365), SD-WAN uses a secured overlay technology.

VMware SD-WAN uses Dynamic Multi-Path Optimization (DMPO) technology to ensure assured application performance and uniform QoS mechanisms across different WAN connections. DMPO has four key functionalities: continuous monitoring, dynamic application steering, on-demand remediation and application-aware overlay QoS.

Figure 1: VMware SD-WAN overview

 

DMPO can improve line quality for sites with two or more connections such as MPLS, LTE, leased lines, etc. However, WAN optimization is also useful for sites with a single WAN connection and offers improvement potential there as well.

The NSX software-defined networking solution is designed to provide security, networking, automation and multi-cloud connectivity for virtual machines, containers and bare-metal servers.

NSX-T provides networking functions with GENEVE overlay technology to realize switching and routing at a distributed hypervisor level. NSX offers multi-hypervisor support (ESXi, KVM) and has a policy-driven (API-driven) configuration platform. Routing protocols such as BGP (Border Gateway Protocol) and OSPF (Open Shortest Path First) for north-south routing, as well as NAT, load balancing, VPN, VRF-Lite, EVPN, multicast and L2 bridging, can be implemented with NSX.

Security with micro-segmentation is the primary use case for NSX. A dedicated firewall sits in front of every virtual machine (VM). There are no dependencies on IP address ranges or VLAN configurations, so implementation in an existing infrastructure is possible with minimal effort. A firewall, deep packet inspection (DPI) and context engine on each hypervisor realize a high-performance service-defined firewall based on L7 stateful firewalling. ESXi hosts do not need an agent for micro-segmentation; the existing ESXi management connection is used to roll out and manage firewall rules.

IDPS (Intrusion Detection and Prevention System) has also been possible since NSX-T version 3.x, including a distributed function (see my blog post https://www.securefever.com/blog/nsx-t-30-ids for more details). Other security functions such as URL analysis, the gateway firewall for north/south security and third-party security integrations with NSX are also included.

Multi-cloud technology is another use case for NSX. This includes scenarios between a company's own data centers, to provide disaster recovery or highly available network and security solutions, or hybrid public cloud approaches between your own data center and the public cloud.

Container networking and security is another important use case for NSX. This can be achieved with the dedicated NSX NCP container plugin or with Antrea, the open-standard CNI (Container Network Interface) recently created by VMware.

Further details on NSX can be found in the VMware NSX-T Reference Design Guide for NSX-T version 3.x https://communities.vmware.com/docs/DOC-37591 and in the NSX-T Security Reference Guide https://communities.vmware.com/t5/VMware-NSX-Documents/NSX-T-Security-Reference-Guide/ta-p/2815645

How do VMware SD-WAN and NSX secure medical devices?

The combination of SD-WAN and NSX makes the solution unique. SD-WAN hardware edge boxes act as the gateway for the medical devices and provide the secure transport connection to the data center. Within the data center, NSX secures the medical servers. This protection is independent of the form factor, i.e. whether it is a virtual machine, a bare-metal server or a container-based solution.

The solution is easy to install and operate; all SD-WAN components (hardware and software) are managed from the SD-WAN Orchestrator (VCO). The configuration is flexible, e.g. global policies can be implemented. The components in the data center are software-based and scale easily. An NSX Manager cluster is established as the management and control plane for NSX within the data center.

1. SD-WAN edge component in front of the medical device

The first access point, or default gateway, of the medical devices is the edge SD-WAN hardware component (see Figure 2). Alternatively, it is possible to place an L2 switch or WLAN access controller behind the edge if you have several medical devices in one area. A firewall on the SD-WAN edge handles access security for the medical devices as well as the connections between different medical devices behind the same SD-WAN edge.

Figure 2: Network topology

 

2. Secure transport connection

The SD-WAN edge at the medical device establishes a dedicated tunnel to an SD-WAN edge in the data center. The local area network (LAN) forms the transport network. The VeloCloud Multi-Path Protocol (VCMP) is used to establish an IPsec-secured transport tunnel over UDP port 2426. The SD-WAN edge in the data center can optionally be implemented in hardware, but the easiest way is to build it in the VM-based form factor. The VM has a WAN interface to terminate the IPsec endpoint and a LAN interface to connect to the virtual machines of the medical services or to bare-metal servers such as DICOM or PACS servers. If medical devices need to communicate with other medical devices behind other SD-WAN edges, the IPsec tunnel is established directly, without a detour via the data center edge.

3. SD-WAN in the data center with handover to NSX

If there is no NSX overlay (routing & switching) within the data center, the existing vSphere network implementation can be used (see Figure 3). All NSX security functions can be configured without changes to the routing and switching setup. The VMs or bare-metal servers use the SD-WAN edge as their gateway. NSX security is independent of the network infrastructure, and the NSX Distributed Firewall exists dedicated to each VM interface (vNIC). This means that traffic between different VMs can be secured regardless of whether the virtual machines belong to the same IP range or not. If an NSX overlay is in place, a routing connection is established between the SD-WAN edges and the NSX edges (see Figure 3). This can be realized via BGP, OSPF or static routing. An NSX gateway firewall can be configured to secure north-south or tenant traffic.

Figure 3: NSX overlay technology

 

Operations and monitoring

The SD-WAN Orchestrator (VCO) takes care of the configuration and administration of the SD-WAN edge boxes. The SD-WAN Orchestrator is available as a SaaS service or on-premises. The SaaS service is much easier to implement and administer. Only metadata is sent from the SD-WAN edge to the Orchestrator (VCO) over an encrypted tunnel for reporting and monitoring purposes. The VCO provides a monitoring platform in its graphical user interface based on the data received from the edge devices. The edge devices collect information such as source IP address, source MAC address, source client hostname, network ID, etc. Thanks to a zero-touch provisioning approach, the SD-WAN hardware edges placed in front of the medical devices can easily be activated by a non-IT person (see Figure 4). The process consists of three steps. In the first step, the IT administrator adds the edge to the VCO and creates an activation key. The second step is shipping the device and sending an e-mail with the activation key from the IT administrator to the local contact. In the last step of the deployment, the on-site contact connects the device with internet access and activates it with the dedicated key sent by e-mail. After that, the IT administrator has access to the edge and can carry out further configuration to establish the connection to the medical device.

Figure 4: Zero-touch provisioning of the SD-WAN edge components

 

The entire solution can be monitored with vRealize Network Insight (vRNI). vRNI can collect flows from physical network devices (routers, switches, firewalls), SD-WAN, virtual infrastructure (NSX, vSphere), containers and multi-cloud (AWS, Azure). The tool offers troubleshooting functions and is very helpful for building the firewall rule set. The visibility of individual connections is also provided; see Figure 5 for a packet walk from a dedicated source IP to a dedicated destination IP, where the path details are also very useful.

Figure 5: Path topology with vRealize Network Insight (vRNI)

 


Summary

The solution is easy to install and operate. SD-WAN and NSX come with a lot of automation by default, which makes the solution very flexible and scalable. The key characteristics are:

SD-WAN

  • Access protection for the medical devices through MAC authentication.

  • No changes to the medical device are required.

  • Zero-touch provisioning of the edges for simple commissioning

  • Separation of the ports on an edge through different VLANs.

  • Central administration.

  • Use of global policies for simplified administration.

  • Secure transport encryption from the edge to the data center

  • Hardware or software possible (in the data center)

NSX

  • Protection within the data center with the Distributed Firewall and micro-segmentation at the VM, container or bare-metal server level.

  • Granular control of access for each VM

  • ESXi hosts do not need an agent

  • Firewall rules move with the VM during vMotion.

  • Routing instances can be separated

  • NSX IDPS (Intrusion Detection and Prevention System) available with a distributed function

  • Central administration and monitoring

  • ESXi hosts at remote locations can be administered centrally with NSX.

There is another very interesting application called Edge Network Intelligence (formerly Nyansa). This tool is very interesting for the healthcare sector when it comes to AI-supported performance analysis for networks and end devices. I will publish another blog post on this topic in the coming weeks.

Read More
Joerg Roesch Joerg Roesch

Secure Medical Devices with VMware SD-WAN and NSX

German version here: https://www.securefever.com/blog/secure-medical-devices-with-vmware-sd-wan-and-nsx-mdtpt

In recent years we have seen a lot of cyberattacks like ransomware, malware, trojans, etc. within the healthcare sector. Recently, a ransomware attack on the Irish healthcare system (https://www.bbc.com/news/world-europe-57197688) took place. The main hacking targets are medical devices like infusion pumps, computed tomography (CT) scanners, magnetic resonance imaging (MRI) scanners, PACS systems (Picture Archiving and Communication System), etc.

Why are medical devices a popular target for hackers?

There are several reasons for this. Medical devices often have security vulnerabilities and their patches are not up to date, because a wide variety of hardware and special operating and application systems are in place. It is not just a matter of securing a dedicated Windows or Linux OS version: hospitals and clinics have to manage a lot of different software and end devices. Medical devices use special protocols like DICOM (Digital Imaging and Communications in Medicine), and from a regulatory perspective (e.g. ISO standards, BSI certifications, etc.) it is often not allowed to make changes to these appliances.

Another main reason why medical devices are a popular target for hackers is the high impact of a successful attack. An attack within the healthcare sector can be really critical: it can end in an outage of medical devices, danger for the patient, loss of personal patient data or interruptions of daily hospital work. The Covid-19 pandemic has made this even more real and critical.

What are the use cases for SD-WAN and NSX?

Before I describe how VMware secures medical devices, I want to provide an overview of the common use cases for VMware SD-WAN and NSX.

VMware SD-WAN optimizes WAN traffic with dynamic path selection. The solution is transport-independent (broadband, LTE, MPLS), simple to configure and manage, and provides a secure overlay (see picture 1). VMware SD-WAN uses Dynamic Multi-Path Optimization (DMPO) technology to deliver assured application performance and a uniform QoS mechanism across different WAN connections. DMPO has four key functionalities: continuous monitoring, dynamic application steering, on-demand remediation and application-aware overlay QoS.

Picture 1: VMware SD-WAN Overview


 

DMPO can improve line quality for locations with two or more connections like MPLS, LTE, leased lines, etc. The WAN optimization is also useful for locations with a single WAN connection.

The NSX software-defined networking solution is designed to provide security, networking, automation and multi-cloud connectivity for virtual machines, containers and bare-metal servers.

NSX-T provides networking functions with GENEVE overlay technology to realize switching and routing at a distributed hypervisor level. NSX supports multiple hypervisors (ESXi, KVM) and has a policy-driven configuration platform. Network services such as BGP and OSPF for north-south routing, NAT, load balancing, VPN, VRF-Lite, EVPN, multicast and L2 bridging can be implemented with NSX.

Security with micro-segmentation is the key driver for NSX. The distributed firewall sits in front of every VM, and east-west security can be realized without dependencies on IP address ranges or VLAN technology. A firewall, deep packet inspection (DPI) and a context engine on every hypervisor realize a high-performance, service-defined firewall based on L7 stateful firewalling. ESXi hosts don't need an agent for micro-segmentation; the existing ESXi management connection is used to push firewall rules.

IDPS (Intrusion Detection and Prevention System) has also been available since NSX-T version 3.x, including a distributed function (see my blog post https://www.securefever.com/blog/nsx-t-30-ids for more details). Other security features such as URL analysis, the gateway firewall for north-south security and third-party security integrations with NSX are included as well.

Multi-cloud networking is another use case for NSX. This could be between private data centers to design disaster recovery or high-availability network and security solutions, or for hybrid public cloud connectivity.

Container networking and security is another important use case for NSX. This can be achieved with the dedicated NSX NCP container plugin or, more recently, with Antrea, a new VMware-created open-source CNI (Container Network Interface).

For more detailed information about NSX, check the VMware NSX-T Reference Design guide for NSX-T version 3.x https://communities.vmware.com/docs/DOC-37591 and the NSX-T Security Reference Guide https://communities.vmware.com/t5/VMware-NSX-Documents/NSX-T-Security-Reference-Guide/ta-p/2815645

How can VMware SD-WAN and NSX help to secure medical devices?

The combination of SD-WAN and NSX makes the whole solution unique. SD-WAN hardware Edge boxes take care of the initial access of the medical devices and the secure transport connection to the data center. Within the data center, NSX secures the medical device servers, regardless of whether they are virtual machines, bare-metal servers or container-based solutions.

The solution is simple to install and operate; all SD-WAN components (hardware and software) are managed from the SD-WAN Orchestrator. It is flexible, i.e. global policies can be configured. Everything within the data center is software-based and easy to scale. An NSX Manager cluster is configured as the management and control plane for NSX within the data center.

1. SD-WAN Edge device in front of the medical device

The first access point or default gateway of the medical devices is the SD-WAN hardware Edge component (see picture 2). It is also possible to place an L2 switch or WLAN access controller behind the Edge if there are more medical devices in an area. A firewall on the SD-WAN Edge handles the initial access security and the connections between different medical devices behind the same SD-WAN Edge.

Picture 2: Network Topology

 

2. Secure transport connection

The SD-WAN Edge establishes a dedicated tunnel to an SD-WAN Edge in the data center. The local area network (LAN) serves as the transport network. The VeloCloud Multi-Path Protocol (VCMP) is used to establish an IPsec-secured transport tunnel over UDP port 2426. The SD-WAN Edge in the data center could also be hardware-based, but the easiest way is to deploy it as a VM-based form factor. The VM has one WAN interface to terminate the IPsec endpoint and one LAN interface to connect to the medical device virtual machines or bare-metal servers such as DICOM or PACS servers. If medical devices need to talk to other medical devices behind different SD-WAN Edges, the IPsec tunnel can also be created directly without a redirection to the data center Edge.
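If a firewall sits between a remote Edge and the data center Edge, UDP 2426 has to be allowed for the VCMP tunnel to come up. As a quick sanity check you can watch for incoming VCMP packets on a Linux host that can see the data center Edge's WAN uplink (for example via a port mirror); the interface name below is only a placeholder:

# placeholder interface name; shows whether VCMP tunnel traffic (UDP 2426) reaches the data center side
sudo tcpdump -ni eth0 udp port 2426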

3. SD-WAN within the Data Center with handover to NSX

If no NSX overlay (routing & switching) is configured within the data center, the existing vSphere networking implementation (see picture 3) can be used. All NSX security features can be implemented without any change to the routing and switching setup. The VMs or bare-metal servers use the SD-WAN Edge as their gateway. NSX security is independent of the networking infrastructure, and the NSX distributed firewall sits in front of every VM interface (vNIC). This means traffic can be secured between different VMs, and it doesn't matter whether the virtual machines are part of the same IP range or not.
If an NSX overlay exists, a routing connection is established between the SD-WAN Edges and the NSX Edges (see picture 3). It can be realized via BGP, OSPF or static routing. An NSX gateway firewall can be configured to secure the north-south traffic or the tenant traffic.
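As a minimal sketch of the dynamic routing option: on the NSX side, the BGP neighbor towards the SD-WAN Edge can be added to the Tier-0 gateway via the NSX-T Policy API. The Tier-0 name, locale-services ID, neighbor IP and AS number below are assumptions for illustration only, not values from this setup:

# assumption: Tier-0 "t0-gw" with locale-services "default"; adjust names, IPs and AS numbers to your environment
curl -k -u 'admin:<password>' -X PATCH \
  "https://<nsx-manager>/policy/api/v1/infra/tier-0s/t0-gw/locale-services/default/bgp/neighbors/sdwan-dc-edge" \
  -H 'Content-Type: application/json' \
  -d '{"neighbor_address": "192.168.50.1", "remote_as_num": "65010"}'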

Picture 3: NSX Overlay


 

4. Operation and Monitoring

The SD-WAN Orchestrator (VCO) takes care of the configuration and administration of the SD-WAN Edge boxes. The SD-WAN Orchestrator is available as a SaaS service or on-premises. The SaaS service is much easier to implement and administrate. Only "metadata" is sent from the SD-WAN Edge to the Orchestrator (VCO) over an encrypted tunnel for reporting and monitoring purposes. The VCO provides an active monitoring report on its graphical user interface using the data received from the Edge devices. Edge devices collect information like source IP address, source MAC address, source client hostname, network ID, etc. The SD-WAN hardware Edges which are placed in front of the medical devices can be activated by a non-IT person thanks to a Zero Touch Provisioning approach (see picture 4). The process is realized in three steps. The initial step is that the IT admin adds the Edge to the VCO and creates an activation key. The second step is the device shipment and an email with the activation key from the IT admin to the onsite person. The last step of the deployment is that the local contact plugs in the device with internet access and activates it with the dedicated key which has been sent via email. Afterwards the IT admin has access to the Edge and can do further configuration to add the medical device.

Picture 4: Zero Touch Provisioning for the SD-WAN Edges


 

The whole solution can be monitored with vRealize Network Insight (vRNI). vRNI can discover flows from physical network devices (routers, switches, firewalls), SD-WAN, virtual infrastructure (NSX, vSphere), containers and multi-cloud (AWS, Azure). The tool provides troubleshooting features and helps a lot with security planning. Visibility of individual connections is also available; see picture 5 for a packet walk from a dedicated source IP to a dedicated destination IP, the path details are very useful.

Picture 5: Path Topology with vRealize Network Insight (vRNI)


 


Summary

The solution is simple to install and operate. SD-WAN and NSX have a lot of automation in place by default, which makes the solution very flexible and scalable. The key features of the solution are:

SD-WAN

  • Access protection of the medical devices through MAC authentication.

  • No changes are mandatory on the medical device

  • Roaming of devices from one Edge to another without any configuration change.

  • Zero Touch Provision for the edges

  • Separation of all ports on an Edge through VLANs.

  • Centralized administration.

  • Use of global policies for simplified administration.

  • Secure transport encryption from the edge to the data center

  • Hardware or Software possible (in the data center)

NSX

  • Protection within the data center with Distributed Firewall and Microsegmentation at the VM, container or baremetal server level.

  • Granular control of access on every VM

  • ESXi hosts do not require an agent

  • Firewall rules will be moved within the vMotion process.

  • Routing instances can be separated

  • NSX IDPS (Intrusion Detection Prevention System) with a distributed function available

  • Central administration and monitoring

  • ESXi hosts at remote sites can be centrally administered with NSX.

There is another brilliant tool named Edge Network Intelligence (formerly Nyansa), which is very interesting for the healthcare sector when it comes to AI-powered performance analytics for healthcare networks and devices. I will create another blog post on this topic within the next weeks.

Read More
Thomas Sauerer Thomas Sauerer

VMware Carbon Black Cloud Container Security with VMC on AWS

On Dec. 22, 2020, VMware Carbon Black Cloud announced the general availability of Container Security. It protects your applications and data in a Kubernetes environment wherever it lives. In this blog post I will explain how to add container security to an existing Kubernetes environment. Keep in mind, I am working in a demo environment.

If you are interested in securing your workloads you can take a look at my previous blog, here.

For more information about K8s or TKG check out the blog from my buddy Alex, here.

What does VMware Carbon Black Container Security provide?

In the first release it provides full visibility and basic protection for all your Kubernetes clusters, wherever your clusters live, public cloud or on-prem. The provided visibility helps you to understand and find misconfigurations. You can create and assign policies to your containerized workloads. In old Carbon Black fashion, you are also able to customize them, and finally, it enables the security team to analyze and control application risks in the developer lifecycle. Enough of it, let’s jump directly into the action.

Let’s open our Carbon Black Cloud console. In “Inventory” we can find the new tab “Kubernetes”. With “K8s Clusters” -> “Add Cluster” we can add our existing K8s cluster to Carbon Black.

Carbon Black provides easy-to-follow step-by-step guidance.

It provides a kubectl command to install the operator. The operator is a container that handles all the communication and takes care of your K8s cluster.

01_kube_apply_edited.png

We are already connected to our K8s cluster. We paste the kubectl apply command into the console and press ENTER. With the “kubectl get pod -A” command we can see that all necessary components are being created. Meanwhile, we can continue in Carbon Black Cloud.

Our next step is to name our K8s cluster and add or create a cluster group. For the cluster name, we are free to use any name we wish, it just has to be unique. A cluster group is used to organize multiple clusters in the Carbon Black console. Cluster groups make it easier to specify resources, apply policies and manage your clusters.

02_Cluster_name_group.PNG

Let’s continue; for the next step we need an API token. We have to name it and with “Generate API Key” we generate one with the needed permissions. After our key is ready, we can continue to create our secret for the K8s components. Copy/paste the command into your kubectl console.

03_API_secret_edited.PNG

To finish the setup, we need to run another apply command, copy/paste, and we are done. Now we wait 2-3 minutes until the installed components report back to the Carbon Black console. Simple, right?
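For reference, the whole onboarding boils down to a handful of kubectl commands. The file and secret names below are placeholders - always use the exact commands the Carbon Black Cloud console generates for your cluster:

# placeholder file/secret names; the console shows the exact commands for your cluster
kubectl apply -f cbc-operator.yaml        # step 1: install the operator
kubectl get pods -A                       # watch the octarine-dataplane components come up
kubectl create secret generic cbc-access-token \
  --namespace octarine-dataplane \
  --from-literal=accessToken=<API_KEY>    # step 2: store the generated API key as a secret
kubectl apply -f cbc-dataplane.yaml       # step 3: final apply to roll out the data plane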

01_kubeconsole_beginning_edit.PNG
01_kubeconsole_edit.PNG
05_Cluster_details_running.PNG

Let’s investigate the details. We can see the status is “running” and green.

Additional information like the name, cluster group, which version is used and the last check-in is shown. We can also see all resources which are deployed and which version they use. When we type the command “kubectl get pods -A” into kubectl, we will see the same resources.

octarine-dataplane                  octarine-guardrails-enforcer-66f8cd8cfc-xj8zp                    1/1     Running   0          25m

octarine-dataplane                  octarine-guardrails-state-reporter-766b798f47-dvswg              1/1     Running   0          25m

octarine-dataplane                  octarine-operator-566fd94995-7p28k                               1/1     Running   0          26m


 

Great, next we can create a scope. A scope is important when we start to create policies. To create one, let’s click on “K8s Scopes” on the left side of the Carbon Black console, and with one click on “Add Scope” we can create a scope.

We will define a very simple scope; in our case we name it “VMC”. Next, we can decide how we want to group our resources, and we have 3 options:

I want to choose clusters (all namespaces included)

I want to choose namespaces (regardless of cluster)

I want to choose namespaces in a specific cluster

We choose the first option, clusters, and simply add our cluster group to it.

6_Scope_creation.PNG

Finally, our next mission is to create a new policy for our K8s cluster. Let’s go to Enforce -> K8s Policies in the Carbon Black Cloud console. With a click on “Add Policy” we can create our first policy.

As usual we have to name it; I will name it “vmc_basic_hardening”. We select our newly created scope, and we check “include init containers”.

Next, we can select pre-defined rules; all pre-defined rules are best practices from the community.

For a basic hardening I would choose the following rules (a small pod spec sketch of what the workload rules translate to follows after the list).

Workload Security

                Enforce not root – block:

                                Containers should be prevented from running with a root primary or supplementary GID.

                Allow privilege escalation – block:

                                AllowPrivilegeEscalation controls whether a process can gain more privileges than its parent process.

                Writable file system – block:

                                Allows files to be written to the system, which makes it easier for threats to be introduced and persist in your environment.

                Allow privileged container – block:

                                Runs container in privileged mode. Processes in privileged containers are essentially equivalent to root on the host.

Network

                Node Port – alert:

                                Allow workload to be exposed by a node port.

                Load Balancer – alert:

                                Allow workloads to be exposed by a load balancer

Command

                Exec to container – block:

                                Kubectl exec allows a user to execute a command in a container. Attackers with permissions could run ‘kubectl exec’ to execute malicious code.
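To make the workload rules above a bit more tangible, here is a minimal pod spec that would satisfy the four “block” rules - a sketch with placeholder names to illustrate the underlying securityContext settings, not the syntax Carbon Black itself uses:

# placeholder pod/image names; the comments map the fields to the rules above
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: hardened-example
spec:
  containers:
  - name: app
    image: busybox:1.36
    command: ["sleep", "3600"]
    securityContext:
      runAsNonRoot: true                 # "Enforce not root"
      runAsUser: 1000
      allowPrivilegeEscalation: false    # "Allow privilege escalation"
      readOnlyRootFilesystem: true       # "Writable file system"
      privileged: false                  # "Allow privileged container"
EOF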

 

In the next screen we can see an overview of all rules and how many violations we would get. If you want to apply the policy to a production environment with a lot of violations, it is recommended to disable those rules or add exceptions. You should speak with the app owners before enabling the rules.

In the end, we confirm the policy and save it.


Now we can look at the workloads and their risk level. Let’s get back to Inventory -> Kubernetes -> K8s Workloads.

Let’s take a look at the health of our K8s cluster; to do so, we go to “Harden” -> “K8s Health”. We can filter by scope and get a great overview of our workload, network, operations, and volume.

7_k8s_health.PNG

By clicking on a specific topic like “allow privileged container” you will see an overview with the resource name, which cluster it is in, which namespace and the resource kind.

8_k8s_health.PNG

Another overview can be found under “K8s Violations”, where you can see the specific violation, what resource is impacted, the namespace, the cluster scope and which policy is applied.

To wrap it up, it doesn’t matter on what platform your K8s clusters are running, but running your K8s clusters in a VMC on AWS environment brings a lot of benefits like flexibility and less maintenance. Carbon Black Container Security gives security teams and administrators the visibility into the container world back.

Read More
Guest User Guest User

Carbon Black App For Splunk

Carbon Black App for Splunk

Carbon Black has put a lot of effort into developing a new, unified app for Splunk that integrates alert and event data from workloads and endpoints into Splunk dashboards. In addition to that, it fully utilizes the Splunk Common Information Model, so we don’t have to deal with event normalization.

There are several good reasons to integrate Carbon Black events and alerts into Splunk. The most interesting use cases for me are data enrichment and correlation. I might get more into detail about this in a future post. Today, I’d like to focus on installation options and the setup.

Deployment Options

There are two options for getting Carbon Black Cloud (CBC) data into Splunk:
1. CBC Event Forwarder
2. CBC Input Add-On

Using the event forwarder (option 1) is the recommended way to get alert data into Splunk and - at the time of writing - it is the only way to get event data ingested. A setup would look like this (in a distributed environment):

CBC-Splunk.png

This option has low latency while being reliable and scalable. Therefore, it should be the preferred way for data ingestion, but since I don’t have an AWS S3 bucket, I’m going to choose the second option.

There is an excellent writeup available in the community and solid documentation on the developer page if you want to learn more about the first option!

Installing the Carbon Black App for Splunk

Now, let’s get started. First of all, I’ll install the Carbon Black App for Splunk. I have a single on-premise server instance of Splunk, so I’ll just install the “VMware Carbon Black Cloud” app from Splunkbase.

Carbon Black Cloud App

When it comes to distributed Splunk environments, you’ll need to do additional installations on your indexing tier and the heavy forwarder.

Creating API keys

API Documentation

Splunk needs API access to receive data, but also for workflows and Live Response. To create the API keys I switch to the Carbon Black console and navigate to:

Settings > API Access > API Access Levels > Add Access Level

I named it “Splunk API” and assigned the permissions as in the table below:

API Permissions

Once that has been done, it’s time to create two new API keys. The first one uses the custom access level that has just been created. The second API key is for Live Response, so the preset “Live Response” access level can be assigned to it.

API Access

Setup Carbon Black App for Splunk

Moving on to the Splunk setup, the first thing is to open the setup page:

Apps > Manage Apps > VMware Carbon Black Cloud “Set up”

I recommend changing the base index to avoid the use of Splunk’s default index “main”. At this point I had already created a new index called “carbonblack”, which I then set as the “VMware CBC Base Index” on the Application Configuration page.
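If you still need to create such an index, the simplest way on a single instance is the Splunk CLI (an equivalent indexes.conf stanza works as well); the index name is of course up to you:

# create a dedicated index for Carbon Black data on the Splunk server
$SPLUNK_HOME/bin/splunk add index carbonblack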

On the same page, you can find a checkbox for data model acceleration. If you’re about to index more than a few GB of Carbon Black data per day, check this box. That’s not the case for me, so I’ll leave it at the default and continue with the “API Token Configuration” tab.

The API token configuration is pretty straightforward: simply copy & paste the previously created API credentials for alerts and Live Response.

API Token Configuration

Because I’m using the Alerts API instead of the Event Forwarder, the “Alert Ingest Configuration” has to be configured; it allows us to set the lookback and interval and to select a filter for a minimum alert severity.

To finalize the configuration I assign the API keys as follows:

API Key Assignment
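Once alerts start flowing in, a quick sanity check from the CLI (or the search UI) could look like this, with “carbonblack” being the base index chosen above:

# confirm Carbon Black data is arriving in the new index
$SPLUNK_HOME/bin/splunk search 'index=carbonblack earliest=-24h | stats count by sourcetype'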
Read More
Joerg Roesch Joerg Roesch

NSX-T Upgrade to Version 3.1

VMware announced NSX-T version 3.1 on the 30th of October 2020. In this blog post I want to describe how to upgrade from version 3.0 to version 3.1.

There are many new features available with NSX-T 3.1, like:

  • IPS (Intrusion Prevention System)

  • Enhanced Multicast capabilities

  • Federation is useable for Production. With NSX Federation it is possible to manage several sites with a Global Manager Cluster.

  • Support for vSphere Lifecycle Manager (vLCM)

Many other features have also been added to the new release. You can check the details in the VMware release notes for 3.1: https://docs.vmware.com/en/VMware-NSX-T-Data-Center/3.1/rn/VMware-NSX-T-Data-Center-31-Release-Notes.html

My LAB has two VM Edge transport nodes configured with BGP routing northbound over Tier-0 and dedicated Tier-1 components (see Picture 1). I have a vSphere management cluster with two ESXi hosts which are not prepared for NSX. The NSX Manager and the NSX Edges are located on the management cluster. A compute cluster with two ESXi hosts and one KVM hypervisor is used for the workloads.

Picture 1: LAB Logical Design


 

Before you start with the upgrade you have to check the upgrade checklist from the NSX-T upgrade guide. The upgrade guide is available in the VMware documentation center https://docs.vmware.com/en/VMware-NSX-T-Data-Center/3.1/upgrade/GUID-E04242D7-EF09-4601-8906-3FA77FBB06BD.html and can also be downloaded as a PDF file.

One important point from the checklist is to verify the supported hypervisor upgrade path; in my LAB, for example, KVM on Ubuntu 16.04 was no longer supported with NSX-T 3.1.

After you have verified the pre-check documentation you can download the upgrade software bundle for NSX-T 3.1 from the VMware download center. The NSX-T upgrade bundle size is 7.4 GByte. As always, for security reasons it is recommended to verify the checksum of the software.
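A quick way to do that on a Linux jump host is to hash the downloaded bundle and compare the result against the value published on the download page (the file name below is just an example, adjust it to your build):

# compare the output against the SHA-256 checksum shown on the VMware download page
sha256sum VMware-NSX-upgrade-bundle-3.1.*.mub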

Now we can start the upgrade. First, navigate in the NSX-T Manager GUI to System - Upgrade. The first step is to upload the installation file locally or from a remote location (see Picture 2). For performance reasons, the local upload should be preferred. It took me about 10 minutes to upload the mub file locally.

Picture 2: NSX-T Upgrade


When the software is uploaded four steps are mandatory to finalize the upgrade:

  1. Run Pre-checks

  2. Edges

  3. Hosts

  4. Management Nodes

Run Pre-checks

Before we upgrade the components we need to run the pre-checks first (see Picture 3). Pre-checks can be run overall or individually for Edges, hosts and management nodes. The pre-check results can be exported to a CSV file. In my lab the system showed some warnings: for the Edges and for the NSX Manager I received warnings that my root and admin passwords for the Edges will expire soon. The warning had also been visible in the NSX-T Manager alarms upfront, but I had not checked this before :-) For the hosts I got the message that my KVM Ubuntu version is not compatible with NSX-T 3.1.

Picture 3: Upgrade Pre-check


 

Edge Upgrade

After the pre-checks have been verified, the next step is the Edge upgrade (see Picture 4). You can work with the serial or parallel option. Serial means upgrading all Edge upgrade unit groups consecutively; parallel means upgrading all Edge upgrade unit groups simultaneously. The serial and parallel options can be set across groups and within groups. I have only one group because I have only one Edge cluster; within an Edge cluster the order should always be consecutive. You can follow the upgrade process under details, where you get log entries from the installation.

Picture 4: Edge upgrade


 

Hosts Upgrade

The next step of the migration is the host upgrade (see Picture 5). You can realize it with the parallel or serial option as well. The maximum limit for a simultaneous upgrade is five host upgrade unit groups and five hosts per group.

Verify that ESXi hosts that are part of a disabled DRS cluster or standalone ESXi hosts are placed in maintenance mode.

For ESXi hosts that are part of a fully enabled DRS cluster, if the host is not in maintenance mode, the upgrade coordinator requests the host to be put in maintenance mode. vSphere DRS migrates the VMs to another host in the same cluster during the upgrade and places the host in maintenance mode.
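For the standalone or DRS-disabled case, maintenance mode can also be entered manually on the host, for example via esxcli (assuming the VMs on that host have been evacuated or powered off first):

# enter maintenance mode on the host before its NSX upgrade
esxcli system maintenanceMode set --enable true
# leave maintenance mode again after the host has been upgraded
esxcli system maintenanceMode set --enable false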

Picture 5: Hosts Upgrade


 

Management Nodes Upgrade

The last step is the NSX-T Manager upgrade (see Picture 6). In my LAB I have only one NSX-T Manager in place and the process is quite straightforward.

Picture 6: NSX-T Manager Upgrade


 

Summary

The NSX-T 3.1 upgrade is really simple and does not need to be done in one step. Depending on your NSX-T design, the update is possible with little or no downtime. My recommendation is to do the Edge and the NSX Manager upgrade in a maintenance window. The transport nodes can be updated afterwards during business hours. The serial and parallel options are also very helpful there.

Read More
Guest User Guest User

VMware & Cloud Workload Protection

Hey everyone!

After the announcement and additional sessions at VMworld in September, VMware Carbon Black (the Security Business Unit at VMware) launched their Workload Security offering(s) last week. This is the next strategic step by VMware within their “Intrinsic Security” vision and strategy.

Cloud Workload Security or Cloud Workload Protection, aka CWS/CWP/CWPP (the second P for Platform), as a title is not really something “new” within the security market. When you look at the enterprise security landscape and some of the well-known security vendors, you will easily find out that CWP is the successor of legacy server security products and suites. In times of physical, virtual, container and serverless systems (or just workloads) and equal components within most common data center and infrastructure environments, it’s one of the most important topics security teams need to address right now to secure their IP.

When I started to write this blog post, I thought about how deeply I should describe this specific topic in a fair amount of time without getting too deep too easily. But then I found a blog post from the Carbon Black team which describes most of the ideas I wanted to share in a very good way. You may also have noticed that I borrowed their article name somehow, but it’s the perfect match for it! So, if you are new to all this or just want to refresh your knowledge, please take a look at this blog post by the Carbon Black folks, here -> Defining Cloud Workload Protection

Ok, I hope you’ve enjoyed reading the article I’ve mentioned before! :)
Now I think we’re good to go, so let’s dig deep and check out some details around this product launch!

The Cloud Workload Protection (CWP) module adds new functionalities like Vulnerability Assessment, increased data center visibility and an “agentless” approach to VMware Carbon Black’s cloud-native endpoint security platform called Carbon Black Cloud (CBC), which already offers several security components around NGAV, EDR and Audit & Remediation features, using the newest anti-malware methods like real-time queries, machine learning, AI, cloud reputation, data enrichment and more against malware, advanced attacks and upcoming threats.

New features added by CWP within the Carbon Black Cloud Management console


Carbon Black Cloud Workload itself is a data center security product that protects your workloads running in a virtualized environment. It ensures that security is intrinsic to the virtualization environment by providing a built-in protection for virtual machines. After enabling the Carbon Black functionality in vCenter Server, you can view the inventory protected by Carbon Black Cloud Workload and view the inventory and risk assessment dashboard provided by Carbon Black Cloud Workload Plug-in.

vmware_vsphere_cwp.png

You can now easily monitor and protect the data center workloads from the Carbon Black Cloud console. The Carbon Black Cloud Workload Plug-in provides deep visibility into your data center inventory and end-to-end life-cycle management for the components.

Carbon Black Cloud Workload consists of a few key components that interact with each other.


You must first deploy an on-premises OVF/OVA template for the Carbon Black Cloud Workload appliance that connects the Carbon Black Cloud to the vCenter Server through a registration process. After the registration is complete, the Carbon Black Cloud Workload appliance deploys the Carbon Black Cloud Workload Plug-in and collects the inventory from the vCenter Server. The collected inventory data is displayed on the plug-in Inventory tab and is also communicated to the Carbon Black Cloud console.
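If you prefer the command line over the vSphere Client for that step, the appliance OVA can also be rolled out with ovftool; all names, paths and the target locator below are placeholders for illustration only:

# placeholder names/paths; deploy the Carbon Black Cloud Workload appliance OVA with ovftool
ovftool --acceptAllEulas --name=cbc-workload-appliance \
  --datastore=datastore1 --network="VM Network" \
  ./carbon-black-cloud-workload-appliance.ova \
  'vi://administrator%40vsphere.local@vcenter.lab.local/Datacenter/host/Cluster/'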

So far, so good. Let’s do a quick check now on the available information & resources! Here is a short overview:

the good thing:
All VMware vSphere customers have access to an extended trial version of VMware Carbon Black Cloud Workload Essentials. You can sign up for it HERE.
Check out the datasheet to get some details about it HERE. There is also a video available HERE: Introduction to VMware Carbon Black Cloud Workload (direct video link)

the not so good thing:
you need to be very quick to sign up for the free and extended trial phase ;)
Some copy / paste content from the Trial FAQ (see full details at the link below) to light up the darkness:

"...

  • What is the deadline for sign up for the free trial? December 1, 2020.

  • After I sign up, how soon will I get access to the Workload Trial? The Workload Product Trial will be available November 2020. Eligible participants will receive email communications to let them know when the product is launched and how to download.

  • How long does the free trial last? The trial will be available through April 30, 2021.

..."

There is a ton of information available already, but I was searching for the (hopefully) most useful content, which can be found below:

Additional Content & Details:

Technical Details:


Enough said! Now it’s time to try it out! :)

Read More
Security, Network, VMware Joerg Roesch Security, Network, VMware Joerg Roesch

VMworld Network & Security Sessions 2020

VMworld 2020 takes place remotely this year, from the 29th of September 2020 until the 1st of October 2020. Within this blog post I provide recommendations about some deep-dive sessions related to Network & Security. I have excluded general keynotes, certifications and Hands-on-Labs sessions from my list. I have focused on non-100-level sessions; only in the case of new topics have I made some exceptions.

Pricing

The big advantage of a remote event is that everyone can join without any travelling; the big disadvantage is of course the missing socializing over some drinks :-) Everyone can register for the general pass without any costs. There is also the possibility to order a premier pass which includes additional benefits like more sessions, discussions with VMware engineers, 1-to-1 expert sessions, a certification discount, etc.

VMworld Session Recommendations

Now I come to my session recommendations, which are based on my experience from the last years and on topics which are interesting from a Network and Security point of view. But first I have to say that every VMworld session is worth joining, and for me it is the most important source of technical updates, innovation and training.

NSX Sessions - Infrastructure related

  • Large-Scale Design with NSX-T - Enterprise and Service Providers [VCNC1838]

  • Enhancing the Small and Medium Data Center Design Through NSX Data Center [VCNC1400]

  • Deploying VMware NSX-T in Traditional Data Center Infrastructure [VCNC1766]

  • Logical Routing in NSX-T [VCNC1264]

  • NSX on vSphere Distributed Switch: Update on NSX-T Switching [VCNC1197]

  • NSX-T Performance: Deep Dive [VCNC1149]

  • Demystifying the NSX-T Data Center Control Plan [VCNC1164]

  • NSX Federation: Everything About Network and Security for Multisites [VCNC1178]

  • NSX-T Deep dive: APIs Built for Automation [VCNC1417]

  • The Future of Networking with VMware NSX [VCNC1555]

NSX Sessions - Operation and Monitoring related

  • NSX-T Operations and Troubleshooting [VCNC1380]

  • Deep Dive: Troubleshooting Applications Without TCPdump [VCNC1920]

  • Automating vRealize Network Insight [VCNC1710]

  • Why vRealize Network Insight Is the Must-Have Tool for Network Monitoring [ISNS1285]

  • Discover, Optimize and Troubleshoot Infrastructure Network Connectivity [HCMB1376]

NSX Sessions - NSX V2T Migration related

  • Migration from NSX Data Center for vSphere to NSX-T [VCNC1150]

  • NSX Data Center for vSphere to NSX-T Migration: Real-World Experience [VCNC1590]

NSX Sessions - Advanced Load Balancer (AVI) related

  • How VMware IT Solved Load Balancer Problems with NSX Advanced Load Balancer [ISNS1028]

  • Active-Active SDDC with NSX Advanced Load Balancer Solutions [VCNC2043]

  • Load Balancer Self-Service: Automation with ServiceNow and Ansible [VCNC1390]

NSX Sessions - Container related

  • NSX-T Container Networking Deep Dive [VCNC1163]

  • Introduction to Networking in vSphere with Tanzu [VCNC1184]

  • How to Get Started with VMware Container Networking with Antrea [VCNC1553]

  • Introduction to Tanzu Service Mesh [MAP1231]

  • Connect and Secure Your Applications Through Tanzu Service Mesh [MAP2081]

  • Forging a Path to Continuous, Risk-Based Security with Tanzu Service Mesh [ISCS1917]

NSX Security Sessions

  • IDS/IPS at the Granularity of a Workload and the Scale of the SDDC with NSX [ISNS1931]

  • Demystifying the NSX-T Data Center Distributed Firewall [ISNS1141]

  • NSX Intelligence: Visibility and Security for the Modern Data Center [ISNS2496]

  • Micro-Segmentation and Visibility at Scale: Secure an Entire Private Cloud [ISNS1144]

  • Best Practices for Securing Web Applications with Intrinsic Protection [ISNS1441]

  • Network Security: Why Visibility and Analytics Matter [ISNS1686]

  • Protecting East-West Traffic with Distributed Firewalling and Advanced Threat Analytics [ISNS1235]

Network & Security and Cloud

  • NSX for Public Cloud Workloads and Service [VCNC1168]

  • Cloud Infrastructure & Workload Security: VMwareSecure State & Carbon Black [ISWL2072 + 2754]

  • Investigate and Detect Cloud Vulnerabilites with VMware Secure State [ISCS1973]

  • Service-Defined Firewall Multi-Cloud Security Design [ISCS1030]

  • Azure VMware Solutions: Networking and Security Design & NSX-T [HCPS1576]

  • VMware Cloud on AWS: Networking Deep Dive and Emerging Capabilities [HCP1255]

  • NSX-T: Consistent Networking & Security in Hyperscale Cloud Providers [VCNC1425]

Intrinsic Security

  • Cloud Delivered Enterprise Remote Access and Zero Trust [ISNS2647]

  • Flexibly SOAR Toward API Functionality With Carbon Black [ISWS1095]

  • Remote Work Is Here to Stay: How Can IT Support the New Normal [DWDE2485]

  • Mapping Your Network Security Controls to MITRE ATT&CK [ISNS2793]

  • Transform Your Security to a Zero Trust Model [ISWL2796]

Intrinsic Security - VMware Carbon Black Cloud EDR

  • Become a Threat Hunter [ISWS2604]

  • Endpoint Detection & Response for IT Professionals [ISWS2690 + 2653]

  • VMware Carbon Black Audit and Remediation: The New Yes to the Old No [ISWS1241]

Intrinsic Security - VMware Carbon Black Workload

  • Intro to VMware Carbon Black Cloud Workload [ISWL2616]

  • Comprehensive Workload Security: vSphere, NSX, and Carbon Black Cloud [ISWL2618]

  • Vulnerability Management for Workloads [ISWL2617 + 2755]

Intrinsic Security - VMware Carbon Black Endpoint

  • Securing Your Virtual Desktop with VMware Horizon and VMware Carbon Black [ISWS1786]

  • VMware Security: VMware Carbon Black Cloud and Workspace ONE Intelligence [ISWS1074]

SD-WAN - VeloCloud

  • SD-WAN Sneak Peek: What`s New Now and into the Future [VCNE2345]

  • Users Need Their Apps: How SD-WAN Cloud VPN Makes That Connection [VCNE2350]

  • VMware Cloud and VMware SD-WAN: Solutions Working in Harmony [VCNE2347]

  • Seeing Is Believing: AIOps, Monitoring and Intelligence for WAN and LAN [VCNE2384]

  • Why vRealize Network Insight Is the Must-Have Tool for Network Monitoring [ISNS1285]

  • VMware SD-WAN by VeloCloud, NSX, vRealize Network Insight Cloud [HCMB1485]

Summary

There are a lot of interesting VMworld sessions, also for many other topics like Cloud, End User Computing, vSphere, Cloud-Native Apps, etc. Do not worry if you missed a presentation; the recordings are usually provided by my colleague William Lam on GitHub.

Here you can find the slides and the recording from VMworld 2019 in US and EMEA:

https://github.com/lamw/vmworld2019-session-urls/blob/master/vmworld-us-playback-urls.md

https://github.com/lamw/vmworld2019-session-urls/blob/054b036e35d5f2c2426c5167c62273ed9e4715b3/vmworld-eu-playback-urls.md

Feel free to add comments below if you see other mandatory sessions within the Network & Security area.

Read More