
Install Windows 7 on AWS EC2 from ISO for client testing


windows7

This post focuses on Windows 7 on AWS, but note that you can also run Windows XP, Windows 8, Windows 10 and so on by following similar steps.

Whether you have Active Directory in your environment and are looking for cloud options, or you're thinking about how to structure your client testing lab, you will most likely need Windows machines for your labs. Here's a simple how-to guide for running Windows 7 on AWS EC2 very quickly, for any Windows use case. Follow this guide to install Windows 7 from ISO and install VMware Tools, and you're good to go.

As you probably know, it's not possible to install and run Windows 7 natively on AWS. Ravello makes it possible using nested virtualization. First, we're going to log into Ravello, create a new application, find the provided empty VM in our Ravello library, and drag and drop it onto the canvas.

empty-VM

Next we need our ISO. The first step is to upload your Windows 7 ISO to Ravello. I'm not going to cover that here, but here's a very quick guide to uploading your ISO. Once we have the ISO, we can go ahead and configure the disk to boot from CD-ROM first. We'll browse through the library and choose the relevant Windows 7 ISO.

win7-iso

Now let's publish the application. Note that you should give the VM at least 2 vCPUs (preferably 4).

publish-app

Once the VM is published we can open up the console and go through the Win 7 installation.

install-windows

Basically, that's it. You have a Windows VM running in the cloud (in my case I chose to publish to the AWS Virginia region, but you can publish to any of the AWS and Google Cloud regions). You can use the console now, and if the mouse behavior doesn't feel quite right, you should install VMware Tools.

So first we'll go back to the application canvas in Ravello and add another disk to our VM, this time browsing the library for the VMware Tools ISO (again, after uploading the VMware Tools ISO to our library). Once we update the VM on the canvas, the VM will restart. If we now look at Device Manager in the VM, we'll see that we have the VMware Pointing Device.

VMware-tools

Finally, to optimize the VM's performance, it is recommended to use para-virtualized devices for the network and the disk controller: in the network configuration we'll use VMXNET3, and for the disk we'll add a disk using the PVSCSI controller.

network-config disk-config

The VM will restart again, and we will have a Windows 7 machine running on AWS. I hope this guide will be helpful to you when you set up your Windows machine. You can start a free trial and walk through these steps.

This is a technology blog. If you want to use Ravello to run Windows, you must comply with Microsoft's licensing policies and requirements. Please consult with your Microsoft representative.

The post Install Windows 7 on AWS EC2 from ISO for client testing appeared first on The Ravello Blog.


Run an NFV Architecture (OPNFV) on AWS and Google – Brahmaputra Edition


opnfv-flower

Author:
Iben Rodriguez
Iben is a cloud consulting and virtualization architect. He is trained in Agile, ITIL, SOX, PCI-DSS and ISO 27000. He is working to shift SDN testing functions out of the test lab and closer to the developers and operators.

Set up and operate your own OPNFV architecture for dev, test and training using Ravello Systems on AWS and Google Cloud.

What is OPNFV Architecture

The Genesis project of OPNFV defines a number of core technologies that are part of the open source NFV platform. These include:

  • OpenStack with various core projects such as Horizon, Nova, Neutron, Glance and Cinder
  • Open vSwitch
  • Integration with an approved network controller

The Brahmaputra release of OPNFV from February 2016 includes support for 4 different “bare-metal” installers for OpenStack integrated with 4 different network controller options.

These installers deploy OpenStack per the Genesis guidelines, on hardware you provide in a network test lab or "in the cloud", with various approved network controller options.

A number of feature and test projects use these environments as a platform once the platform is built.

If you’re new to OPNFV and DevOps there can be a pretty steep learning curve to get up to speed with all the components needed to get a working platform going and maintain it.

Organizations wishing to participate in the development and testing of the OPNFV architecture should follow the guidelines established by the Pharos project, which specify 6 physical servers connected to a Jenkins build server that uses scripts to issue commands to a jump box machine. The jump box then installs an operating system onto the target nodes and configures the OpenStack, network controller, storage and compute functions. Preparing the environment and running the scripts can take a few hours even when automated, and that doesn't include the time spent planning and debugging a new installation.

Get it on Repo
REPO, by Ravello Systems, is a library of public blueprints shared by experts in the infrastructure community.

The blueprints we provide are based on the OPNFV Copper Academy work done by Bryan Sullivan from AT&T, which provides a lighter-weight 4-node design that can run virtually, either on premises or hosted "in the cloud".

Here’s a screenshot of the network layout for the blueprints covered by this blog:

network-layout

Setup and run OPNFV Architecture lab on AWS and Google Cloud

This series of posts and blueprints is intended to make it easier (and cheaper) to set up an OPNFV test environment. All you will need to get started is a web browser and an account with Ravello Systems. The following 5 blueprint configurations are being made available and shared on Ravello Repo and will be kept up to date on a regular basis as new OPNFV releases are available:

  1. Builder blueprint with a MaaS controller pre-installed and three empty nodes for bootstrap, controller and compute, all ready for configuration and OPNFV build-out with your SDN of choice. It will take a few hours to deploy this blueprint, configure MaaS, deploy, and produce a working OPNFV installation customized exactly to your parameters. This is NOT a self-contained blueprint: you must provide SSH keys and a GitHub account, and do some involved file editing. Intended for the more advanced developer.
  2. Contrail blueprint including an already deployed OPNFV with OpenStack Kilo and Juniper Contrail SDN Controller. Spin up an app from this blueprint and in 20 minutes you will have a working OpenStack environment. Beginner level.
  3. ODL blueprint including an already deployed OPNFV with OpenStack Liberty and Open Daylight SDN controller.
  4. ONOS blueprint including an already deployed OPNFV with OpenStack Liberty and the ON.Lab ONOS SDN controller.
  5. NOSDN blueprint including an already deployed OPNFV with OpenStack Tip/Master and no SDN controller.

What is Ravello Systems

Ravello's HVX nested hypervisor with overlay networking is delivered as software as a service. You can run any virtual environment with any networking topology on AWS and Google Cloud without modifications. Using Ravello blueprints, you can automatically provision fully isolated multi-tier environments.

Getting Started with the OPNFV Academy Blueprints

Here are the general steps needed to get started with these blueprints and get up and running quickly with one of the pre-built configurations we have provided. See the readme from the github repo for more detailed steps.

  1. Open a new web browser window and Add this blueprint to your library
  2. Create an application from the blueprint. Check the Network topology as follows:
    application-blueprint
  3. Start the application - wait 10 to 15 minutes for the machines to spin up and be ready
    start-application
  4. Once the VM is started you can find the IP address for the MAAS server in the Ravello dashboard.
    vm-started
  5. Perform a Basic Functional Test to ensure the admin console for each function is working (a quick connectivity check is sketched after this list).
    1. Open a new web browser window to the IP address of the MAAS server.
    2. Open new web browser windows with the MAAS server IP address followed by the port of the function you wish to use:
      1. Juniper OpenContrail
      2. OpenStack Horizon
      3. Log in to the Juju GUI admin console to see the deployed model corresponding to the blueprint. The screenshot shows the Juju bundle (a collection of 32 charms) for OpenStack with OpenContrail SDN.
      4. OpenDaylight DLUX console
      5. MaaS admin console
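To make the basic functional test repeatable, here is a minimal sketch that simply checks that each console answers on its port once the application is up. It assumes the MAAS server's DNS name from the Ravello dashboard and uses example port numbers only; adjust both to your deployment.

# A minimal reachability check for the lab consoles. The hostname and the port
# numbers below are assumptions/placeholders - replace them with your own.
import socket

MAAS_HOST = "maas.example.ravcloud.com"   # hypothetical DNS name from the Ravello dashboard
CONSOLES = {                              # example ports only
    "MAAS": 80,
    "OpenStack Horizon": 80,
    "OpenContrail": 8143,
    "Juju GUI": 443,
}

for name, port in CONSOLES.items():
    try:
        socket.create_connection((MAAS_HOST, port), timeout=5).close()
        print("%-20s port %-5d reachable" % (name, port))
    except OSError as exc:
        print("%-20s port %-5d NOT reachable (%s)" % (name, port, exc))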

Next steps

After this the possibilities are endless.

  • Be sure to join the GitHub repo to post any issues or suggestions you have.
  • You can become familiar with the various tools, such as Juju and MaaS from Canonical.
  • Try out the other OPNFV blueprints from Ravello Repo.
  • These blueprints are small non-HA versions - make your own blueprint with an HA (High Availability) deployment.
  • Sign up for an account with the Linux Foundation that will give you access to update the wiki, post patches to Gerrit, update JIRA issues, and use Jenkins.
  • Modify and create your own blueprints on Ravello to share them with others.
  • A REST API and Python SDK are also available, allowing automation of Ravello workloads as part of the product lifecycle for your company (a minimal SDK sketch follows this list).
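For example, here is a minimal sketch using the ravello-sdk Python package (pip install ravello-sdk) to create and publish an application from a blueprint. The method names follow the public SDK, but treat them as assumptions and check them against the SDK version you install.

# Assumption: method names follow the public ravello-sdk project; verify against
# your installed version before relying on this sketch.
from ravello_sdk import RavelloClient

client = RavelloClient()
client.login('your-user', 'your-password')     # Ravello account credentials

# List blueprints in the library, then create and publish an application.
for bp in client.get_blueprints():
    print(bp['id'], bp['name'])

app = client.create_application(
    {'name': 'opnfv-lab-01', 'baseBlueprintId': 12345})   # hypothetical blueprint id
client.publish_application(app['id'])                     # deploy to the cloud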

The post Run an NFV Architecture (OPNFV) on AWS and Google – Brahmaputra Edition appeared first on The Ravello Blog.

NFV Orchestration: Increase service velocity with NFV (part 2 of 4 post series)


NFV-orchestration

Author:
Matt Conran
Matt Conran is a Network Architect based out of Ireland and a prolific blogger at Network Insight. In his spare time he writes on topics including SDN, OpenFlow, NFV, OpenStack, cloud, automation and programming.

In this second part of a 4-post series around NFV orchestration we detail how NFV (Network Function Virtualization) can help alleviate multi-tenancy and network mobility challenges and increase service velocity (pace at which services can be rolled out) across enterprises and service providers.

Increasing the service velocity with NFV

In traditional networks, the time to deploy new services is very long. Physical network functions are contained in physical boxes and appliances, and traffic requiring service treatment must be physically wired to the box. In a multi-tenant environment, deploying new services for new tenants is difficult and time consuming; it slows product innovation and hampers new service and product testing. There is a need to evolve the network and make it more cloud compatible. Networks need to move away from manual provisioning and respond to service insertion in an automated fashion. Service insertion should take minutes, not weeks.

How do we get the network to adapt to these new trends? One way to evolve your network is to employ network function virtualization. With NFV, ordering a new service can be done in seconds. For example, a consumer can request via a catalogue a number of tiers and certain traffic flows permitted between tiers. The network is automatically provisioned without human intervention.

NFV eliminates human intervention and drives policy automation.

NFV and SDN complement each other but are used to satisfy separate functions. SDN is used to program network flows, while NFV is used to program network functions.

Network Function Virtualization decouples network functions from proprietary hardware and runs them in software. It employs virtualization techniques to manage network services in software as opposed to running these functions on static hardware devices. NFV gives a tenant the perception that they have a logically isolated network to themselves. A network function is basically a software module running on x86 hardware; the hardware underneath a proprietary appliance is cheap, and you are really paying for the software and maintenance costs. So why not run network services on Intel x86 yourself?

The building blocks for NFV are Virtual Network Functions (VNFs). VNFs handle specific network functions such as firewalling, intrusion protection, load balancing and caching. They run in one or more virtual machines and can be service chained by an SDN controller or some other mechanism. Once the network services are virtualized they can be dynamically chained into a required sequence by an SDN controller, for example the Contrail SDN Controller. Chaining is usually carried out by creating dynamic tunnels between endpoints and routing traffic through the network function by changing the next hop. The chaining technology is not limited to the control of an SDN controller: the Locator/ID Separation Protocol (LISP) can also be used to implement service chaining with its encapsulation/decapsulation functions. Once the network functions are in place, LISP can be used to set up the encapsulation paths.

Please read Part 3 of this series, where I go into how network automation and service chaining can help increase service velocity (the roll-out of services) in enterprises and service providers.

A special thanks to Jakub Pavlik and his team at tcpcloud - a leading private cloud builder - for collaborating on this post.

The post NFV Orchestration: Increase service velocity with NFV (part 2 of 4 post series) appeared first on The Ravello Blog.

NFV Orchestration: Networking Automation using Juniper Contrail (part 3 of 4 post series)


NFV-orchestration

Author:
Matt Conran
Matt Conran is a Network Architect based out of Ireland and a prolific blogger at Network Insight. In his spare time he writes on topics including SDN, OpenFlow, NFV, OpenStack, cloud, automation and programming.

This is part 3 of a 4-part series on NFV orchestration using public cloud NFVI. This post looks at how service orchestration using Juniper Contrail can assist with multi-tenancy and workload mobility, and increase service velocity through NFV orchestration and service chaining.

Juniper Contrail for network automation

Contrail uses both an SDN controller and vRouter instances to implement VNFs. Juniper views its entire platform as a cloud network automation solution, providing you with virtual networks. Virtual networks reside on top of physically interconnected devices, with a lot of the intelligence pushed to the edge of the network through tunnel creation. The role of the network now sits closer to the user groups. This is the essence of cloud-enabling multi-tenancy, which couldn't be done properly with VLANs and other traditional networking mechanisms.

Contrail exposes APIs to orchestration platforms to receive service creation commands and provision new network services. From these commands it spins up virtual machines for the required network functions, for example NAT or IPS.

Juniper employs the standardised protocols BGP and MPLS/VPN, which have extremely robust and mature implementations. Why reinvent the wheel when there are proven technologies that work?

Supporting both virtual and physical resources, Contrail also leverages the open source cloud orchestration platform OpenStack and acts as a plugin for OpenStack's Neutron project. OpenStack and Contrail are fully integrated: when you create a network in Contrail it shows up in OpenStack. Contrail has also been extended to use the OpenStack Heat infrastructure to deploy networking constructs.

Benefits of Juniper’s Contrail Network Abstraction

Juniper's Contrail allows you to consume the network in an abstract way. What this means is that you can specify the networking requirement to the orchestrator in a simple manner. The abstract definitions are then handed to the controller. The controller acts as a compiler: it takes these abstract definitions and converts them into low-level constructs, such as the routing instances or ACLs required to implement the topology you specify in the orchestration system.

The vRouter implements the distributed forwarding plane and is installed in the hypervisor as a kernel module on every x86 compute node. It extends the IP network into the software layer. The vRouter is installed on all x86 compute nodes because VMs can be spun up on any of them; if VM-A gets spun up on compute A, we have forwarding capability on that node. It does not augment the Linux bridge or OVS, it is a complete replacement.

The intelligence of the network now sits with the controller, which programs the local vRouter kernel modules. The fabric of the network, be it leaf-and-spine or some other physical architecture, only needs to provide end-to-end IP connectivity between the endpoints. It doesn't need to carry out any intelligence, policy or service decision making. All of that is taken care of by the Contrail controller, which pushes rules down to the vRouters sitting at the edge of the network.

Now that the service is virtualized, it can easily be scaled with additional VMs as traffic volume grows. Elastic and dynamic networks can offer on-demand network services. For example, say you have a requirement to restrict access to certain sites during working hours. NFV enables you to order a firewall service via a service catalogue. The firewall gets spun up and properly service chained between the networks. Once you no longer require the firewall service, it is deactivated immediately and the firewall instance is spun down, and any resources it used are released. The entire process enables elastic, on-demand service insertion.

Service insertion is no longer tied to physical appliance deployment, which in the past severely restricted product and service innovation. Service providers can try out new services on demand, shortening the time to market.

Service chaining increases service rollout velocity

For example, take an application stack with multiple tiers: the front end implements web functionality, the middle tier implements caching, and the back end serves as the database tier. You require 3 networks for the tiers, with VMs in each of them implementing that tier's functionality. You attach a simple security policy, so that only HTTP is permitted from the front end to the caching tier, and traffic gets scrubbed by the virtual firewall before being sent to the database tier.

This requirement is entered into the orchestration system, and the compute orchestration system (OpenStack Nova) launches the VMs (matched per tier) on the corresponding x86 compute nodes. For VMs that are in the same network but on different hosts, the network is extended by the vRouters establishing a VPN tunnel; a mesh of tunnels can be created to whatever hosts are needed. The vRouter creates a routing instance on each host for each network and enforces the security policies. All the security policies are implemented by the local vRouters sitting in the kernel.
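Purely as an illustration of what "entering the requirement into the orchestration system" can look like at the API level, the sketch below creates the three tier networks and an HTTP-only ingress rule through the Neutron API that Contrail plugs into. Credentials, names and CIDRs are placeholders, not values from this post.

# Illustrative sketch only: tier networks plus an HTTP-only rule via Neutron.
from neutronclient.v2_0 import client as neutron_client

neutron = neutron_client.Client(
    username='admin', password='secret123', tenant_name='admin',
    auth_url='http://controller:5000/v2.0')          # placeholder endpoint

tiers = {'web': '10.10.10.0/24', 'cache': '10.10.20.0/24', 'db': '10.10.30.0/24'}
for name, cidr in tiers.items():
    net = neutron.create_network({'network': {'name': name}})
    neutron.create_subnet({'subnet': {
        'network_id': net['network']['id'], 'ip_version': 4, 'cidr': cidr}})

# Allow only HTTP from the web tier into the caching tier.
sg = neutron.create_security_group({'security_group': {'name': 'cache-in'}})
neutron.create_security_group_rule({'security_group_rule': {
    'security_group_id': sg['security_group']['id'],
    'direction': 'ingress', 'protocol': 'tcp',
    'port_range_min': 80, 'port_range_max': 80,
    'remote_ip_prefix': '10.10.10.0/24'}})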

Security policies assigned to tenants are no longer implemented in your physical network; there is no need for the box-by-box mentality. Policy is contained in the vRouter, which is distributed throughout the network.

Nothing in the physical environment needs to be changed. The controller programs the flows: for example, if VM-A talks to VM-B, send the packet to the virtual load balancing device and then to the virtual firewall. All vRouters are then programmed to look for this match and, if it is met, send the traffic to the correct next hop for additional servicing. This is the essence of Contrail service chaining: the ability to specify an ordered list of segments you would like the traffic to traverse. The controller and the vRouters make sure the stream of traffic follows the appropriate chain. For example, send HTTP traffic through a firewall and a load balancer, but send telnet traffic just to the firewall.

The post NFV Orchestration: Networking Automation using Juniper Contrail (part 3 of 4 post series) appeared first on The Ravello Blog.

How To Install And Run Windows 8 From ISO on AWS EC2


Windows_8_Logo

Although AWS does not natively allow you to install your own Windows 7, Windows 8 or Windows XP on an AMI by attaching an ISO, it's fairly easy to do this using Ravello's nested virtualization. This is particularly useful for client testing on AWS.

This step-by-step guide focuses on how to run Windows 8 on AWS. You can also refer to our other Windows guides here.

Steps:

  1. Create an account and login to Ravello, then click “Create Application” to create a new application in Ravello
  2. Drag an empty VM from the library on to the Ravello canvas.
  3. Click on “Import VM” and follow the prompts to upload your Win8 ISO
  4. Click on "Publish", choose Performance Optimized, and select an AWS region
  5. After you publish, click on the console button to complete your installation in the console
  6. Save your image to the library so that you can skip these steps when you want to re-use it later. Now you can also spin up a farm with hundreds of Windows clients using a single API call (a rough sketch follows below).
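As a rough sketch of step 6, the loop below creates and publishes several applications from a saved blueprint with python-requests against the Ravello REST API. The endpoint paths, payload fields and region value are assumptions based on the public API documentation; verify them before use.

# Assumptions: API base URL, endpoint paths and payload/response fields.
import requests

BASE = 'https://cloud.ravellosystems.com/api/v1'   # assumed API base URL
BLUEPRINT_ID = 12345                               # hypothetical saved Win8 blueprint id

session = requests.Session()
session.auth = ('your-user', 'your-password')
session.headers.update({'Accept': 'application/json',
                        'Content-Type': 'application/json'})

for i in range(1, 11):                             # ten identical Windows 8 clients
    app = session.post(BASE + '/applications', json={
        'name': 'win8-client-%02d' % i,
        'baseBlueprintId': BLUEPRINT_ID}).json()   # 'id' field assumed in the response
    session.post(BASE + '/applications/%d/publish' % app['id'], json={
        'optimizationLevel': 'PERFORMANCE_OPTIMIZED',
        'preferredRegion': 'Virginia'})            # assumed field values
print('requested 10 applications')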

Performance tuning tips:

  1. Give your empty VM at least 2 vCPUs
  2. Install VMware Tools to eliminate any mouse sync issues in the console
  3. Use para-virtualized devices such as VMXNET3 for networking and PVSCSI for disks

Screenshots:

win-8-app

upload-tool

win-8-vm

publish-win-8

win-8-console

This is a technology blog. If you want to use Ravello to run Windows, you must comply with Microsoft's licensing policies and requirements. Please consult with your Microsoft representative.

The post How To Install And Run Windows 8 From ISO on AWS EC2 appeared first on The Ravello Blog.

OPNFV Testing on Cloud


opnfv-flower

Author:
Brian Castelli
Brian Castelli is a software developer with Spirent creating test methodologies for today's networks. His current focus is on SDN and NFV.

The OPNFV project is dedicated to delivering a standard reference architecture for the deployment of carrier-grade Network Function Virtualization (NFV) environments. Testing is critical to the success of the project and to the success of real-world deployments, as evidenced by the many test-related sub-projects of OPNFV. One of those subprojects, VSPERF, is dedicated to benchmarking one of the key NFV components: The virtual switch.

The VSPERF community has developed a test harness that is integrated with several test tools. When deployed, virtual switches can be tested in a stand-alone, bare-metal environment. Standard benchmarking tests, such as RFC2544, are supported today, with more tests in the pipeline.

Running VSPERF for optimum performance and consistency requires dedicated hardware. This is acceptable for running the tests themselves, but it increases the cost of test development. Developers need a lower-cost environment where they can rapidly create and prove the functionality of tests that can then be moved to hardware for final testing. The hardware requirement also poses a problem for product marketing and sales. Those teams need a way to demo VSPERF test capabilities to potential customers without lugging around hardware.

The Ravello Systems environment presents a solution to the problems faced by these two groups. By virtualizing the test environment and taking advantage of Ravello’s Blueprint support, we were able to:

  • Virtualize the entire VSPERF test environment
  • Create a blueprint to enable rapid deployment
  • Create multiple, low-cost, access-anywhere development environments for engineers in various geographic locations
  • Create on-demand demo environments for customer visits and trade shows
  • Only pay for what we use

We started with virtual versions of our standard products.
nfv-testing

  • A License Server is required to handle licensing of test ports
  • A Lab Server is required to handle REST API communication from the VSPERF test harness
  • STCv provides our virtualized test ports. In this case, we used a single STCv instance with two test ports. This configuration gives our virtual test ports the best performance and consistency.

The 10.0.0.0 subnet was created for management. 11.0.0.0 and 12.0.0.0 were created for dataplane traffic.

Next we instantiated the VSPERF test harness host, connecting it to the appropriate networks.

All along the way, Ravello’s interface gave us the options we needed to configure.

  • DNS names. Each node was given a fully-qualified domain name that remained constant even though underlying IP addresses might change from one power-up to the next. This gave us the ability to script and configure without worrying about changes (see the sketch after this list).
  • CPU, memory, storage. Each node could have as many or as few resources as necessary. How many times have we been frustrated in the lab to find out that we need more cores or a larger hard drive? Here that is a configuration change instead of a purchase order.
  • Publishing optimized for cost or performance. We can minimize cost, or publish for better performance.
  • Timeout. By default, applications time out after a configurable period of time. This saves us from unnecessarily racking up charges by accidentally running over the weekend. Of course, Ravello also supports "forever" operation for nodes that we really do want to run all weekend.
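As a small illustration of the DNS point above, a test script can resolve each node's fixed FQDN at run time instead of hard-coding IP addresses. The hostnames below are hypothetical.

# Resolve the fixed FQDNs to whatever IPs the current power-up assigned.
import socket

NODES = ['license-server.example.local',
         'lab-server.example.local',
         'stcv-1.example.local',
         'vsperf-host.example.local']

for fqdn in NODES:
    try:
        print(fqdn, '->', socket.gethostbyname(fqdn))
    except socket.gaierror as exc:
        print(fqdn, 'did not resolve:', exc)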

Overall, Ravello Systems offers good value and flexibility for the needs described here.

Brian’s current role is to support the development of NFV test methodologies and to support Spirent’s participation in the OPNFV project.

The post OPNFV Testing on Cloud appeared first on The Ravello Blog.

Kali Linux penetration testing labs


imgres

As an increasing number of enterprises turn to Ravello to replicate their environments on the public cloud for security testing, we have seen growing interest from the security community in running security labs on the public cloud to "sharpen the saw" and hone their security testing and ethical hacking skills. Kali Linux is one of the most popular tools used for this purpose.

We are proud to announce the availability of a Kali Linux based penetration testing lab on Ravello that security enthusiasts can access with one click. Developed by the Kali Linux team for the security community, this lab contains Kali Linux, bWAPP (bee-box) and the Metasploitable vulnerable VM.

Get it on Repo

REPO, by Ravello Systems, is a library of public blueprints shared by experts in the infrastructure community.

 

Kali Linux Lab on Ravello

Interested in running this Kali Linux lab? Just open a Ravello account, and add this blueprint from Ravello Repo. If you want help in setting up your very own customized security or penetration testing lab with your favorite tools, just drop us a line. We are standing by to help.

 

The post Kali Linux penetration testing labs appeared first on The Ravello Blog.

Installing and configuring Trend Micro Deep Security, vSphere and NSX environment on AWS and Google Cloud


trend-micro

Trend Micro Deep Security is a security suite providing antivirus, intrusion prevention, firewalling, URL filtering and file integrity monitoring for both virtual and physical systems. For virtualized systems, Deep Security can provide both client-based and clientless solutions, giving you a single management solution for virtual desktops, servers and physical systems. In addition, Deep Security can integrate with VMware NSX, providing automated network firewalling and security options whenever Deep Security detects malicious activity on your systems.

In this blog post, we'll show how to set up a lab environment for Trend Micro Deep Security using AWS and Google Cloud capacity, covering both agentless and agent-based protection and the integration with VMware vSphere.

If you are a reseller and/or system integrator, you can build Deep Security labs like these on the public cloud and use them for your sales demos, proofs of concept (POCs) and training environments. You pay hourly based on the size of your lab and only when you are using it.
You can set up an environment with the Trend Micro Deep Security appliance, other servers and client systems within the Ravello Systems interface, test and run it on AWS or GCE, and then save it as your demo/POC/training blueprint. Then, when you need to spin up multiple Trend Micro Deep Security environments across the globe for your team, you can spin them up on AWS or Google Cloud within minutes using the saved blueprint.

Preparing your environment

For this blog, we've prepared the following environment in Ravello Systems.

  1. VMware Horizon view connection server (optional)
  2. Trend Micro Deep Security Manager running on Windows 2012R2
  3. Domain Controller
  4. 2 ESXi Host servers
  5. Openfiler storage server (optional)
  6. vCenter server running on Windows 2012R2

image09

Since we'll mainly focus on the setup of Deep Security, we won't go into too much detail on the vSphere setup. Click on the link for a brief overview of how to configure and deploy VMware vSphere in Ravello. In addition, here's a detailed guide for vCenter.

Installation of Deep Security Manager

The Windows host is added to the testlab.local domain as dsm.testlab.local. After this, the latest Windows version of Deep Security Manager is downloaded from downloadcenter.trendmicro.com.

image01

image27

Choose your installation language. Click ok.

image23

The pre-installation check notices that the VM is not configured with enough resources to run a production environment, but since this is for demonstration purposes, this shouldn't be a problem.

image10

Click Next.

image26

Read the license agreement and click the accept radio button when you agree. Click Next.

The Upgrade Verification runs to check if there is a previous version installed. In this demo environment we are starting with a new installation.

image21

Change the location accordingly. Click Next.

image06

Fill in the required external database hostname, database instance and so on. For this demo I'm using the embedded database. Note: do not choose the embedded database for a production environment, as the installer will also tell you.

image02

 

Enter the Activation code. For this lab we’ll be using a trial license which can be acquired through this link.

image16

Hostnames, IP addresses and port numbers. Change these only if your environment already uses the required ports. Click Next.

image28

Configure your administrator account and click next.

image19

In this step, we'll configure security updates. This creates a scheduled task for security updates (remember to note in your procedures that these are scheduled tasks). For this demo environment we do not use a proxy server to connect to the Trend Micro site for security updates.

image04

Next, we’ll configure the same scheduled task for our software updates.

image29

Enable a Relay agent for distribution of definitions and updates to the protected agents and virtual appliances in your lab environment. In this case we’ll install the relay on the management server, but in a production environment it’s recommended to install this on one or multiple separate servers.

image22

Since this is a demo environment we’ll disable the smart feedback.

image07

Before starting the installation, you are shown a summary of all the installation settings. Confirm that everything is configured correctly and select "Install".

image13

Once the installation is finished, allow the DSM console to open and click Finish. After logging in to the Deep Security Manager, we should see the following dashboard:

image00

Deep Security Manager Configuration

First we'll add the vCenter we installed earlier for this lab. Open the "Computers" tab, then right-click "Computers" (in the leftmost menu) and select "Add VMware vCenter".

image25

Enter the configuration details of your vCenter server, then click next. Accept the vCenter server SSL certificate and select finish.

image17

image14

Now that you've completed the vCenter configuration of Deep Security, it's time to deploy the virtual appliances used for agentless protection. Since we are using vSphere 6 with Trend Micro Deep Security 9.6, we will not deploy the filter driver. This is something to watch out for if you are reading other blog posts or are familiar with older versions of Deep Security and vSphere.

First, we'll need to import the vSphere security appliance. Download the 9.5 virtual appliance from this link.

Once the download has completed, open “Administration”, then drill down to updates ->software -> local. Import the file you just downloaded.

After importing the package, open your vCenter in the Computers view, then drill down to "Hosts and Clusters". Right-click the host you want to protect and select "Actions -> Deploy Agentless Security".

image24

Enter any name for the appliance and select the details of deployment.

image05

Next, enter your network configuration. If you are using DHCP you can leave that enabled; for this lab we're using static address assignment, so we'll configure the appliance with the correct network settings.

image12

Provision the appliance as either thick or thin (your preference), and wait for the deployment to finish. Once the deployment finishes, you can continue with the activation of the virtual appliance. Afterwards, the appliance should show up in the list of computers, and you should be able to activate virtual machines without installing the agent.

Agent based protection

First, we'll have to add our Active Directory to the Deep Security Manager. While you can also protect systems without Active Directory, this makes the deployment significantly easier.

Go back to "Computers", then right-click "Computers" in the left menu. Select "Add Directory" and enter your AD details.

image11

Next, we’ll create a scheduled task to synchronize the directory.

image08

image18

image03

Next we'll have to import the agent. Open "Administration", then drill down to Updates -> Software -> Download Center. Search for "Windows", select the latest agent version, right-click and select "Import". Once the import is done, select "Support" in the top right part of the management console, then select "Deployment Scripts". Select your platform and copy the script.

After adding our Active Directory, we should be able to see the machines joined to the domain. Verify that you can see your machines by opening the Computers tab and browsing through your list of computers.

Log in to the machine you wish to protect and run the script, which will install the agent. Normally in a production environment you'd either deploy the agent through a management tool or preinstall it in the image, but for now manual installation will suffice. After the agent has been installed, go back to the Deep Security Manager and open the Computers view. Right-click one of the machines you wish to protect, and select Actions -> Activate/Reactivate.
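If you do want to push the copied deployment script to a handful of lab machines in one go, here is a hedged sketch using the pywinrm package (pip install pywinrm). Hostnames, credentials and the script filename are placeholders.

# Assumption: WinRM is enabled on the lab clients; names/credentials are placeholders.
import winrm

HOSTS = ['client01.testlab.local', 'client02.testlab.local']   # hypothetical machines
CREDS = ('TESTLAB\\administrator', 'your-password')

# Save the PowerShell deployment script copied from the DSM console to this file.
with open('ds_agent_deploy.ps1') as f:
    deploy_script = f.read()

for host in HOSTS:
    session = winrm.Session(host, auth=CREDS)
    result = session.run_ps(deploy_script)
    print(host, 'exit code:', result.status_code)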

image15

After a minute or so, the status of your machine should change to “managed (Online)” and your virtual machine will be protected by Trend Micro Deep Security. By opening the details of a protected computer (or creating a policy) you can enable features such as anti-malware, intrusion prevention, firewalling or one of the other security products that are integrated in Deep Security. With this setup, you should be ready to start testing the product and its extensive set of options to protect your environment.

The post Installing and configuring Trend Micro Deep Security, vSphere and NSX environment on AWS and Google Cloud appeared first on The Ravello Blog.


NFV Orchestration: Setup NFV Orchestration on AWS and Google Cloud (part 4 of 4 post series)


NFV-orchestration

Authors:
Jakub Pavlik
Jakub Pavlik and Ondrej Smola are engineers at tcpcloud – a leading private cloud builder.
Matt Conran
Matt Conran is an independent network architect and consultant, and blogs at network-insight.net

This is part 4 of a 4-part series on NFV orchestration using public cloud NFVI. This post details setting up a fully functioning NFV orchestration with firewall and load-balancer service chaining, and comes with a fully functional NFV service chaining topology, with Juniper Contrail chaining firewall and load-balancer services, that you can access on Ravello and try out.

The NFV topology in this Ravello blueprint presents firewalling and load balancing Virtual Network Functions (VNFs). Three use case scenarios are prepared, showing FWaaS and LBaaS launched by OpenStack Heat templates:

  • PFSense - a free, open source, FreeBSD-based firewall and router offering unified threat management, load balancing and multi-WAN.
  • FortiGate-VM - a full-featured FortiGate packaged as a virtual appliance, ideal for monitoring and enforcing policy on virtual traffic on leading virtualization, cloud and SDN platforms, including VMware vSphere, Hyper-V, Xen, KVM and Amazon Web Services (AWS).
  • Neutron agent with HAProxy - a free, very fast and reliable solution offering high availability, load balancing and proxying for TCP and HTTP-based applications.

Architecture Components

The following diagram shows the logical architecture of this blueprint. OpenStack together with OpenContrail provides the NFV infrastructure. The virtual resources are orchestrated through Heat, and different tools are then used for VNF management.

image01

Components

  • NFV - Service Chaining in OpenContrail through VMs by OpenStack
  • VNF - orchestrator for VMs containing FwaaS (FortiGate, PFSense), LbaaS (Neutron plugin HAProxy)

The NFV topology consists of 5 nodes. The management node is used for public IP access and is accessible via SSH; it is also used as a jump host to connect to all other nodes in the blueprint. The controller node is the brains of the operation and is where OpenStack and OpenContrail are installed. Finally, we have three compute nodes named Compute 1, Compute 2 and Compute 3 with Nova Compute and the OpenContrail vRouter agent installed. This is where the data plane forwarding is carried out.

The diagram below displays the 5 components used in the topology. All nodes apart from the management node have 8 CPUs, 16GB of RAM and 64GB of total storage. The management node has 4 CPUs, 4GB of RAM and 32GB of total storage.

image11

The intelligence runs in the Controller, which has a central view of the network. It provides route reflectors for the OpenContrail vRouter agents and configures them to initiate tunnels for endpoint connectivity. OpenContrail transport is based on the well-known protocols MPLS over UDP, MPLS over GRE and VXLAN. The SDN controller can program the correct next-hop information to direct traffic to a variety of devices by manipulating labels and next hops.

Previous methods for service chaining include VLANs and PBR, which are cumbersome to manage and troubleshoot. Traditional methods may require some kind of tunneling if you are service chaining over multiple Layer 3 hops. The only way to provide good service chaining capabilities is with a central SDN controller. The idea of having a central viewpoint of the network is proving to be a valuable use case for SDN networks.

Internal communication between nodes is done over the 10.0.0.0/24 network. Every node has one NIC on the 10.0.0.0/24 network and the Management and Controller nodes have an additional NIC for external connectivity.

Installation of OpenStack with Open Contrail

For the installation of Juniper Contrail we used the official Juniper Contrail Getting Started Guide.

The package name and version is contrail-install-packages_2.21-102~ubuntu-14-04juno_all.deb. This installs both OpenStack and OpenContrail.

From the diagram below you can see that the virtual network has 5 instances, 9 interfaces and 4 VNs for testing. The OpenContrail dashboard is the first place to view a summary of the virtual network.

image02

Login information for every node:
User: root
Password: openstack

also

User: ubuntu
Password: ravelloCloud

Login to openstack and opencontrail dashboards:
User: admin
Password: secret123

The OpenStack dashboard URL depends on the Ravello public IP of the controller node, but is always x.x.x.x/horizon. For example:
http://controller-nfvblueprint-eaxd3p7s.srv.ravcloud.com/horizon/

The OpenContrail dashboard is on the same URL but on port 8143. For example:
https://controller-nfvblueprint-eaxd3p7s.srv.ravcloud.com:8143/login

NOTE: For the VNC console to work properly in OpenStack, you should change the "novncproxy_base_url" line in /etc/nova/nova.conf on every compute node to your controller's URL.

Example:
novncproxy_base_url = http://controller-nfvblueprint-eaxd3p7s.srv.ravcloud.com:5999/vnc_auto.html

The two services we will be testing are load balancing and firewall service chaining. Load balancing will be provided by the LBaaS agent and firewalling will be based on FortiGate and PFSense.

Within OpenStack we create one external network called “INET2”, which can be accessed from the outside (Management and Compute nodes in ravello).

The “INET2” network has a floating IP pool of 172.0.0.0/24. The pool is used to simulate public networks. The simple gateway for this network is on Compute2.

All virtual instances in OpenStack can be accessed from the OpenStack dashboard, through the console in the instance detail view.

OpenStack Heat Templates

Heat is the main project of the OpenStack orchestration program. It allows users to describe deployments of complex cloud applications in text files called templates. These templates are then parsed and executed by the Heat engine.

image16

OpenStack Heat Templates are used to demonstrate load balancing and firewalling inside of Openstack.

These templates are located on the Controller node in the /root/heat/ directory. Every template has two parts: an environment file with specific variables, and the template itself. They are located in:

/root/heat/env/
/root/heat/template/

We have 3 Heat templates to demonstrate the NFV functions.

  • LBaaS
  • PFSense firewall - open source firewall
  • FortiGate VM firewall - 15-day trial version

You can choose from two main use case scenarios:

LbaaS Use Case Scenario

To create the heat stack with the LbaaS function use the command below:

heat stack-create -f heat/templates/lbaas_template.hot -e heat/env/lbaas_env.env lbaas

This command will create 2 web servers and the LBaaS service instances.
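If you prefer to drive this from code rather than the heat CLI, the sketch below creates the same stack with python-heatclient. The Keystone handling is simplified and the exact client keyword arguments should be verified against your heatclient version.

# Assumption: python-heatclient/keystoneclient keyword arguments match the
# Juno/Kilo-era clients used in this blueprint.
from keystoneclient.v2_0 import client as keystone_client
from heatclient.client import Client as HeatClient

ks = keystone_client.Client(username='admin', password='secret123',
                            tenant_name='admin',
                            auth_url='http://controller:5000/v2.0')
heat_url = ks.service_catalog.url_for(service_type='orchestration')
heat = HeatClient('1', endpoint=heat_url, token=ks.auth_token)

with open('heat/templates/lbaas_template.hot') as t, \
     open('heat/env/lbaas_env.env') as e:
    heat.stacks.create(stack_name='lbaas', template=t.read(), environment=e.read())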

The load balancer is configured with a VIP and a floating IP, which can be accessed from the "public" side (the Management and Compute nodes in Ravello).

Firewalls (FwaaS) Use Case Scenarios

To create the heat stack for the pfsense function use the command below:

heat stack-create -f heat/templates/fwaas_mnmg_template.hot -e heat/env/fwaas_pfsense_env.env pfsense

To create the heat stack for the fortigate function use the command below:

heat stack-create -f heat/templates/fwaas_mnmg_template.hot -e heat/env/fwaas_fortios_contrail.env fostios

This will create the service instance and one Ubuntu instance for testing.

Description of Load balancing Use Case

The Heat template used for the load balancer will create a number of elements including the pool, members and health monitoring. It instructs OpenContrail to create service instances for load balancing. This is done through the OpenStack Neutron LBaaS API.

More information can be found here.

The diagram below displays the load balancer pools, members and the monitoring type:

image03

image15

image14

The load balancing pool is created on a private subnet 10.10.10.0/24. A VIP is assigned, which is allocated from the virtual network named public network. The subnet for this network is 10.10.20.0/24.

The load balancer has 2 ports on the private network and 1 port on the public network. There is also a floating IP assigned to the VIP, which is used for reachability from outside of OpenStack/OpenContrail.
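For reference, the objects the Heat template creates map onto the Neutron LBaaS v1 API that OpenContrail implements. The sketch below shows the equivalent direct API calls; the subnet IDs and member addresses are placeholders, and the call names should be checked against the Kilo-era neutronclient.

# Assumption: Neutron LBaaS v1 calls in the Kilo-era python-neutronclient.
from neutronclient.v2_0 import client as neutron_client

neutron = neutron_client.Client(username='admin', password='secret123',
                                tenant_name='admin',
                                auth_url='http://controller:5000/v2.0')

pool = neutron.create_pool({'pool': {
    'name': 'web-pool', 'protocol': 'HTTP', 'lb_method': 'ROUND_ROBIN',
    'subnet_id': 'PRIVATE_SUBNET_ID'}})['pool']          # 10.10.10.0/24 subnet

for addr in ('10.10.10.11', '10.10.10.12'):              # placeholder web instance IPs
    neutron.create_member({'member': {
        'pool_id': pool['id'], 'address': addr, 'protocol_port': 80}})

neutron.create_vip({'vip': {
    'name': 'web-vip', 'protocol': 'HTTP', 'protocol_port': 80,
    'pool_id': pool['id'], 'subnet_id': 'PUBLIC_SUBNET_ID'}})  # 10.10.20.0/24 subnet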

The diagram below summarises the network topology for the virtual network:

image00

For testing purposes, the load balancing Heat template creates 2 web instances in the private network. There is also a router connected to this private network, because after boot the web instances will attempt to download, install and configure the apache2 web service.

The diagram below displays the various parameters with the created instances:

image06

Accessing the web server's VIP address initiates classic round robin load balancing.

NOTE: Sometimes the web instances do not install or configure apache2. This happens when the simple virtual gateway was not automatically created on Compute2. In this case, just create the gateway manually with the Python command located in /usr/local/sbin/startgw.sh on Compute2. After that you can delete the LBaaS Heat stack and create it again, or just set up apache2 manually.

curl is used to transfer data and test the load balancing feature. The screenshot below shows curl being run against the VIP address from the command line, with round-robin results alternating between instances 1 and 2.
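A quick alternative to repeated curl invocations is a short loop that hits the VIP's floating IP and prints the alternating responses. The address below is a placeholder; use the floating IP assigned in your stack.

# Hit the VIP a few times and watch the round-robin responses alternate.
import requests

VIP_FLOATING_IP = '172.0.0.10'   # hypothetical floating IP from the 172.0.0.0/24 pool

for _ in range(6):
    print(requests.get('http://%s/' % VIP_FLOATING_IP, timeout=5).text.strip())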

image08

Description of FWaaS/NAT


We have prepared one Heat template for a firewall service instance with NAT, and two Heat environments for this template: one for the PFSense firewall and one for the FortiGate firewall.

PfSense
Information about this firewall can be found here.

Login information
User: admin
Password: pfsense

Fortigate
Information about this firewall can be found here.

Login information
User: admin
Password: fortigate

NOTE: Compute2 has to have the default gateway set up for testing, as described in the LBaaS note above.

Fortigate provisioning

This action must be taken after the FortiGate VM has been successfully deployed by the Heat template. OpenStack runs an instance named MNMG01, which is used for the configuration of the FortiGate service instance.

The configuration can be done with two python scripts.

fortios_intf.py
fortios_nat.py

fortios_intf.py - this script will configure the interfaces for the firewall
fortios_nat.py - this script will configure the firewall NAT rules

Running scripts:
python fortios_intf.py
python fortios_nat.py

NOTE: The configuration information is stored in .txt files.
fortios_intf.txt
fortios_nat.txt

Network Topology

The firewall service instance is connected to 3 networks: INET2 as the external network, private_net for the testing instances, and svc-vn-mgmt for the management instance. The topology is the same for both examples (PFSense and FortiGate). In private_net there is one virtual instance for testing connectivity to the external network.

image10

For successful service chaining, Heat will also create a policy in Contrail and assign it to the networks. Contrail is used to orchestrate the service chaining.

image09

image05

Configuration and testing pfsense

By default, the PFSense firewall is configured for NAT after the Heat stack is started, so there is no need to configure anything for this function. The PFSense image was preconfigured with DHCP on every interface and there is an outbound policy for NAT.

After we start the PFSense Heat stack, service chaining is already functional. The testing instance has its default gateway in Contrail, and Contrail redirects the traffic to PFSense.

image07

You can also see the NAT sessions in PFSense. In the shell, run:

pfctl -s state

image12

Configuration and testing fortigate

FortiGate can be configured from the management instance. This instance has floating IP 172.0.0.5, with login root and password openstack; it can also be accessed through the VNC console from the OpenStack dashboard. In this instance there are 2 Python scripts: one for the configuration of the interfaces (fortios_intf.py) and a second for the configuration of the firewall NAT policy (fortios_nat.py).

NOTE: If the FortiGate firewall has a different IP than 10.250.1.252, the information in /root/.ssh/config has to be changed accordingly.

python fortios_nat.py

image17

python fortios_intf.py

image13

After running these two scripts, the testing instance has connectivity to the external network.

image04

Interested in trying this setup with one click? Just open a Ravello trial account, and add this NFV blueprint to your account and you are ready to play with this NFV topology with Contrail orchestrating and service chaining load-balancer and firewall as VNFs.

The post NFV Orchestration: Setup NFV Orchestration on AWS and Google Cloud (part 4 of 4 post series) appeared first on The Ravello Blog.

Virtual training infrastructure: The backbone of hands on labs for ILT & self-paced learning


self-pace-training

Virtual training infrastructure is essential for ISVs, for training providers and for enterprises. It is key that this infrastructure supports the nature of the training use case: hands-on labs for each student should be easily configured, identical, isolated instances; you should be able to spin them up in any geography, and then quickly and easily tear them down. This post demonstrates how three different companies are using Ravello as their virtual training infrastructure to run ILT classes, self-paced trainings and more.

What characterizes a “good” virtual training infrastructure?

There are several key components that should be “deal breakers” when you’re searching for virtual training infrastructure:

  1. Technological fidelity: your training infrastructure should accommodate any and all of the features you have in the environment. If it requires certain networking configurations - you should not need to compromise the quality and fidelity of the environment just to “make the training work”. Training is supposed to expose all the product capabilities and features, so there is no wiggle room here.
  2. Repeatable deployments: the trainer shouldn’t spend valuable time on setting up the same environment again and again every time a class is scheduled. Your training infrastructure should provide for a quick and easy way to save your environment configuration and run it whenever it is needed.
  3. On demand usage: the nature of the training use case is that it’s tough to anticipate the number of environments required in a given class (or timeframe, if it is a self-paced training use case). Your virtual training infrastructure should let you avoid capacity planning and simultaneously enable you to only pay for capacity that is in fact in use (so you don’t buy excess capacity “just in case”).
  4. Accessibility: the virtual training infrastructure should be accessible anywhere in the world. Location should not play a factor in access to or performance of the training lab.
  5. Flexibility: your training infrastructure should “work with you”. If all you need is an easy to use portal to share isolated environments with students - your infrastructure should accommodate that. If you require advanced integration and customization - the infrastructure should allow you as much complexity as your use case requires.

Virtual training infrastructure in action in the real world

Now that we covered the must-haves of your training infrastructure, I thought it might be useful to see some real-life examples of companies using Ravello for their virtual training.

Red Hat

Global partner training with hands on labs

When Red Hat’s Global Partner Enablement (GPE) team learned about Ravello, they were looking for training infrastructure that would allow them to:

  • Expose partners to all the features of their technology and products
  • Be able to scale up and down without the need for capacity planning
  • Deliver best-quality training classes around the globe - regardless of the location of the partner training class
  • Deliver high-volume training very quickly to keep up with development of new products and features - once the training class is designed, be able to repeat it quickly rather than spend time setting it up again.

Using Ravello the Red Hat GPE team has full blueprints of multi-node OpenStack, RHEV, RHEL and OpenShift which they can spin up as required. With Ravello’s nested hypervisor, Red Hat can utilize AWS’s capacity and run virtual training labs in any geography, providing on-demand isolated environments for all students, without the need to allocate capacity in advance.

ROI Training

Instructor led end-user training with hands on labs

ROI Training, a leading provider of technical, financial and management training for enterprises around the globe, customizes training classes to suit the needs of their customer organizations. Looking for a solution that would free them from hardware investments, move them to a usage-based cost structure, and give students self-service access to virtual labs, ROI Training chose Ravello.

With Ravello ROI Training creates on demand lab environments in the public cloud region that provides the best learning experience for students, without investing in hardware or shipping computers. ROI Training also uses Ravello’s blueprints to essentially create a portfolio of lab environment templates for each course, that can be easily used to spin up multiple copies of the environment on demand. Finally, the usage based model allows ROI Training to enjoy cloud economics - and pay only for the environments that are running, for the resources consumed.

Symantec Blackfin

Fully integrated self-paced online security training

Blackfin Security is a leading provider of a full suite of online security training, bringing a hands-on approach to security training, with a self-paced subscription-based security training portal, as well as onsite or on-demand threat simulation events.

With two core requirements of zero changes to the application environments when deploying labs for students, and a high standard for a consistent and coherent user experience for self-paced trainings, Blackfin found they can fulfill their virtual training infrastructure needs by using Ravello to run training labs on AWS or Google Cloud.

Blackfin Enterprises uses Ravello’s REST API to create a new and enhanced self-paced online security training experience for its students. Ravello enables Blackfin to run the VMware based multi-VM applications on Google Cloud and AWS without any modifications to the VMs or the networking configuration.

Video: https://www.youtube.com/watch?v=O8r4W8zwqjg

I hope these three examples illustrated how Ravello can meet any virtual training infrastructure requirements that you may encounter in your use case. You can start your trial here and build out your training lab for ILT, self-paced training, hands-on labs and more. Let us know if you have any questions.

The post Virtual training infrastructure: The backbone of hands on labs for ILT & self-paced learning appeared first on The Ravello Blog.

Performance tuning for nested ESXi on Ravello

$
0
0

racing

Ravello has been pioneering nested virtualization for a while now, and we recently launched a nested ESXi solution running on AWS and Google Cloud. In fact, Ravello currently provides the only way to run nested VMware ESXi or nested KVM on the public cloud. If you're just getting started, this non-dummies guide with a collection of all the useful how-to docs is the best place to start.

For advanced users, below are some performance tuning tips and tricks to make sure that your nested ESXi lab on Ravello has good performance and stability. We are continuing to enhance the platform on our side but in the meantime some of these tweaks can help circumvent known performance issues.

  1. Use the Ravello application scheduled task feature to start all ESXi hosts in the application in a delayed order, after all other VMs (e.g. start the ESXi hosts 5 minutes after all other VMs in the application, and one by one with a 2-minute delay between them). This helps with I/O and CPU intensive workloads. In addition to the hosts themselves, nested guests on each ESXi host should be started in an orderly fashion rather than simultaneously.
  2. For the nested *guests*, always use para-virtual device types: PVSCSI, VIRTIO or LSI Logic for disks and VMXNET/VIRTIO for network. (Emulated devices like IDE will result in poor performance at boot time and afterwards.)
  3. Never over-commit resources for an ESXi host's nested VMs: the sum of CPU/memory allocated to all guest VMs combined should not exceed the CPU/memory of the host VM (a quick check is sketched after this list).
  4. Make sure to use only E1000 NICs for the ESXi *host* VM. (As mentioned in our installation documentation, VMXNET3 for the ESXi host will not function properly.) This follows the best practices outlined by VMware in all nested ESXi related blogs and documentation.
  5. Nested guests should not use more than 2 vCPUs.
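As a quick sanity check for rule 3, you can total up the guest allocations and compare them against the nested ESXi host VM, as in this small sketch with example figures.

# Back-of-the-envelope over-commit check; sizes below are example values only.
host = {'vcpu': 4, 'mem_gb': 16}                       # nested ESXi host VM size
guests = [{'vcpu': 2, 'mem_gb': 4}, {'vcpu': 2, 'mem_gb': 4}]

total_vcpu = sum(g['vcpu'] for g in guests)
total_mem = sum(g['mem_gb'] for g in guests)
if total_vcpu > host['vcpu'] or total_mem > host['mem_gb']:
    print('Over-committed: %d vCPU / %d GB requested on a %d vCPU / %d GB host'
          % (total_vcpu, total_mem, host['vcpu'], host['mem_gb']))
else:
    print('Within host capacity')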

There are lots of active discussions on our ESXi community support forum - join in and share your feedback.

The post Performance tuning for nested ESXi on Ravello appeared first on The Ravello Blog.

8 ways how networking in AWS VPC differs from data-center

$
0
0

Advanced Enterprise Networking In AWS EC2 - A Hands On Guide

Public cloud is great. It brings to the table agility, on-demand capacity, unlimited resources, global reach and, most importantly, the ability to align costs with operational needs. More and more organizations are embracing cloud to augment their datacenter capacity or to move their existing workloads off their datacenters. As datacenter and network architects embark on this journey, they need to be aware that there are some inherent differences in the way networking works in an Amazon VPC when compared to datacenters – so that they can plan ahead.

  1. Broadcast & Multicast unsupported – Over the years, networking vendors have developed many cutting-edge features that rely on broadcast and multicast packets. Advanced functionality such as High Availability and clustering rely on broadcast packets (e.g. GARP, HSRP) for efficient failover. Since public cloud heavily filters broadcast and multicast packets, network architects need to rethink their deployment topology, or rely on other mechanisms to achieve similar objectives.
  2. No Bridge – Many sophisticated data-center setups require Layer 2 bridging for a variety of business reasons – such as extending the application framework developed for a campus beyond its geographical area, workload mobility across zones etc. Public cloud doesn’t offer Layer 2 bridging capabilities, and while there are ways to achieve the same business objectives on public cloud, they all require one to re-architect the application and rely on Layer 3 protocols that offer a different scale of performance objectives than what is available in data-centers.
  3. Different appliances (and limitations) – On public cloud, one doesn’t have access to the same version of the network appliances (and hence the feature-set) that they are used to in datacenters. Most network vendors have launched a version of their appliance on public cloud – but capabilities supported by the Amazon Machine Image or Google Compute Image of these appliances are typically ‘stunted’ compared to their data-center cousins.
  4. Limited number of Network Interfaces – Certain deployments (e.g. deploying a leaf-spine switching architecture) typically require >20 NICs / VM. The number of NICs available per compute instance on public cloud may prove to be prohibitive, forcing one to re-architect the deployment from scratch (e.g. the ENIs supported by AWS vary between 2-8 depending on the instance).
  5. Single CIDR address block/VPC – AWS supports a single CIDR address block per VPC. Subnets within a VPC need to be addressed from this range (a short AWS CLI sketch after this list illustrates this). Default VPCs are assigned a CIDR range of 172.31.0.0/16 and subnets within are assigned /20 netblocks within the VPC CIDR range. If your existing datacenter based application topology uses multiple addressing ranges and different netmasks, you will need to reconfigure the setup before you can migrate it to a VPC.
  6. No replication of IP addresses – Datacenter admins have complete control of the network inside their DC, where they can create isolated networks that run parallel copies of an application with the same IP addresses for pre-production testing. A VPC doesn’t allow one to replicate IP addresses. An IP address assigned to a running instance in a VPC can only be used again by another instance once the original instance is in a “terminated” state.
  7. No Span Ports / Port Mirroring – Many enterprises and security companies require span ports, or mirrored ports, to tap the traffic on the wire for monitoring purposes. Due to the lack of support for Layer 2 networking on public cloud, span ports are unsupported in AWS VPCs.
  8. Limited supported Gateway devices – AWS VPC does have a growing number of customer gateway devices that work with it, but the list is currently limited to seven vendors. If your organization doesn’t use one that is in the list, you will have to switch vendors to use a supported gateway device.
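To make point 5 concrete, here is a minimal AWS CLI sketch of how addressing works in a VPC – the VPC gets one CIDR block, and every subnet must be carved out of it (the VPC ID below is a placeholder for the ID returned by the first command):

aws ec2 create-vpc --cidr-block 10.0.0.0/16
aws ec2 create-subnet --vpc-id vpc-1234abcd --cidr-block 10.0.1.0/24
aws ec2 create-subnet --vpc-id vpc-1234abcd --cidr-block 10.0.2.0/24

Attempting to create a subnet outside the VPC’s CIDR range is rejected, which is why topologies that span multiple address ranges need to be re-addressed first.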

Ravello is an overlay cloud that runs on top of AWS and Google cloud and overcomes the limitations mentioned above. Ravello’s networking overlay and nested virtualization technology enables organizations to run their existing data-center workloads and virtual appliances (same version as they have in DC) with complete Layer 2 capabilities on top of AWS and Google cloud – without modifying application, networking or configuration.

Interested in trying Ravello? Just open a Ravello trial account, and drop us a line.

The post 8 ways how networking in AWS VPC differs from data-center appeared first on The Ravello Blog.

Ravello free service for all 2015 and 2016 vExperts

$
0
0

on_ravello_logo5_on-ravello-12

Congratulations to all the newly minted vExperts of 2016! I’d like to personally thank each and every one of you because as Corey rightly said in the announcement, “each of these vExperts have demonstrated significant contributions to the community and a willingness to share their expertise with others”.

vexpert 2016

Here, at Ravello Systems we’ve been blown away by the level of expertise and enthusiasm that each vExpert brings to the table. It’s been a personal privilege and honor for me to work with the vExpert class of 2015. As of today, there are 400 vExperts all over the world using Ravello for all sorts of interesting labs from OpenStack to ESXi to networking labs. Many of them have been kind enough to share their opinion of Ravello or share expertise by describing how they setup their own Ravello labs. If you’re curious you can find all the links in our community blogs section.

While the sharing directly helps the entire virtualization community, it also indirectly helps Ravello because, like all new technologies, we rely on folks sharing their positive experiences with each other. So please join me in thanking the vExperts who blogged, presented and tweeted about their Ravello experiences - because their efforts have had a direct impact on our ability to extend the Ravello free lab service for all 2015 and 2016 vExperts. I’m very excited to extend the program this year and if you’re a vExpert I hope you will make the most of your Ravello lab for learning and playing with all sorts of cool things... and that you will pay it forward to the community and to next year’s class by taking the time to share your own opinions and experiences.

So if you are a 2015 or 2016 vExpert who doesn’t have a Ravello account yet, you can register here: www.ravellosystems.com/go/vexpert and get started right away. If you already have a vExpert account, you will continue getting 1,000 free CPU hours each month. Feel free to email support@ravellosystems.com if you have any questions or need any help.

The post Ravello free service for all 2015 and 2016 vExperts appeared first on The Ravello Blog.

How to setup and run a penetration testing (pentest) lab on AWS or Google Cloud with Kali Linux, Metasploitable and WebGoat

$
0
0

PenTest_Image

Author:
Clarence Chio
Clarence works at Shape Security on the system that tackles malicious bot intrusion from the angle of big data analysis. Clarence has presented independent research on Machine Learning and Security at Information Security conferences in several countries, and is also the organizer of the "Data Mining for Cyber Security" meetup group in the SF Bay Area.

In this blog, I describe how you can deploy Kali Linux and run penetration testing (also called pen testing) on AWS or Google Cloud using Ravello Systems’ nested virtualization technology. This ‘Linux/Web Security Lab’ lets you hit the ground running in a matter of minutes and start exploiting security vulnerabilities. By the way, if you haven’t already seen it, this blog by SimSpace about on-demand Cyber Ranges on Ravello is very interesting as well.

You’ve been living under a rock if you haven’t noticed the high profile security breaches that have shaken the technology industry in recent years. From huge government spying scandals to countless company database infiltrations, we have never been more aware of the need for securing the complex systems on which we so heavily rely. Security awareness is at an all-time high, but the information security profession largely remains out of reach for most in the tech industry. What exactly do penetration testers do? How does fuzzing or reverse engineering help to make networks and systems more secure? This blog post aims to give beginners and security amateurs some hands-on experience with popular systems and tools used by security professionals to help keep those black hats out.

It’s difficult to embark on your ethical-hacking endeavors by trying to find vulnerabilities in an ATM. That’s kind of like learning to swim by swimming across the English Channel. You want to build up some water-confidence and learn the strokes before you enter the big leagues. This is precisely why ‘deliberately vulnerable’ systems such as Metasploitable (by Rapid7) and WebGoat (by OWASP) were born. Making use of the built-in security vulnerabilities in these systems, you can get familiarized with the tools used in real-world vulnerability assessments and learn more about how systems have been compromised in the past. You will be surprised at how many of these old vulnerabilities still exist in modern systems that we use every day.

If you’re not sure of what you’re doing, it’s generally not a good idea to deliberately execute vulnerable code on your machine. Sandboxing these applications in a Virtual Machine (VM) is a good way to ensure that attackers don’t get into your system while you’re learning the ropes. However, setting up these VMs correctly and securely can be quite a bit of work. It requires procuring the necessary hardware, getting the appropriate permissions to execute these mock tests, securing the VMs so nothing leaks out into your corporate network, and much more. What if you could procure the necessary hardware on demand on public cloud and build completely sandboxed environments that represent your corporate network topologies and system setup to learn and execute penetration testing exercises? Public clouds, for good reasons, don’t easily allow building and running penetration tests, because of the impact they can have on other customers on a shared infrastructure. You can still do some of this testing on AWS, however, you have to go through an approval and setup process.

Ravello’s HVX nested virtualization technology implements a fully fenced L2 overlay network on top of AWS and Google Cloud, so you can set up Security Smart Labs with multiple systems/VMs and complex networking representative of corporate environments – promiscuous mode, multiple NICs, static IPs and more. You can build environments with multiple systems, test and run the environment on AWS or Google Cloud, and save them as Ravello blueprints. Ravello blueprints provide the capability to save entire environments and spin up multiple isolated copies across the globe on AWS and Google Cloud within minutes. This can be used to provision on-demand security labs for pen testing training, sales demos and POCs.

Section I: Setting Up Your Environment

In this brief walkthrough, we will get a simple and extensible environment set up in Ravello with 3 VMs - Kali Linux, Metasploitable 2, and WebGoat 7.0 running on Ubuntu. Kali is a Linux distribution based on Debian, designed for penetration testing and vulnerability assessments. More than 600 penetration testing tools come pre-installed with the system, and it is today’s system of choice for most serious ethical hackers. Metasploitable is an intentionally vulnerable Linux VM, and WebGoat is a deliberately insecure web application server with dozens of structured lessons and exploit exercises that you can go through. After getting the lab environment set up, we will run through a couple of simple examples where we use Kali as a base for launching attacks on Metasploitable and WebGoat. By the end of this exercise, you will have successfully exploited your first Linux system and web server.

To get started, first ensure that you have a Ravello account and search for the ‘Linux/Web Security Lab Blueprint’ published by me on the Ravello Repo. Select ‘Add to Library’, and proceed to the Ravello dashboard.

image15

After selecting the ‘Library’ → ‘Blueprints’ tab on the dashboard sidebar, you can then select the blueprint you just added to your library and click the orange ‘Create Application’ button. This will take you to the ‘Applications’ section of the dashboard, where you can launch the application by publishing it to the cloud.

image04

Publishing the application will launch these VMs on a cloud environment, made possible by Ravello’s nested virtualization technology. It will take roughly 10 minutes for the VMs to launch. Once you see that all 3 VMs are running, we will then be ready to enter the boxes. Using the ‘Console’ feature of the Ravello platform is the easiest way to get command line or graphical access to the boxes within your web browser. You can also SSH into the boxes in your own terminal by following the instructions provided in the ‘More’ tab in the bottom of the dashboard right sidebar under ‘Summary’.

Enter all the boxes through the console and find out each VM’s IP address (usually 10.0.0.*) either through the command line (ifconfig) or by looking at the top right hand corner of the console page.

Get it on Repo
REPO, by Ravello Systems, is a library of public blueprints shared by experts in the infrastructure community.

Section II: Exploiting Metasploitable with Armitage on Kali Linux

Let’s enter the Kali Linux console, which will bring you through the boot and login sequence of the OS. You can either boot from the image or install the OS - I prefer the former because there is no need (in this case) for any state to be saved between sessions.

The main tool that we will be exploring today is Armitage. Armitage is ‘a graphical cyber attack management tool for the Metasploit Project that visualizes targets and recommends exploits’. We will make use of exploits that Armitage recommends and see just how easy it is to exploit a vulnerable Unix system like Metasploitable. From the Kali desktop, launch a terminal window.

image06

Armitage requires PostgreSQL to be running in the background, and also requires some Metasploit state to have been initialized. Execute the following commands to meet these requirements and launch Armitage:

$ service postgresql start
$ service metasploit start
$ service metasploit stop
$ armitage

image01

This will bring up a window where you have to configure Armitage’s connection to Metasploit. The default settings are shown in the above screenshot, and the username:password ‘msf:test’ will work.

image02

Allow Armitage to start Metasploit’s RPC server.

image09

Once in Armitage, do a ‘Quick Scan (OS detect)’ of the Metasploitable VM by entering its IP address into this dialog box. As you might guess, the Quick Scan function of Armitage allows you to scan a range of IP addresses and discover all machines in that range by performing an ‘nmap’ scan.
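If you prefer the command line, the Quick Scan is roughly equivalent to running nmap yourself from a Kali terminal (the target address below is a placeholder - substitute your Metasploitable VM’s IP):

nmap -sV -O 10.0.0.12

Here -sV probes service versions and -O attempts OS detection, which is the kind of information Armitage uses to suggest exploits.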

image13

Once the scan is complete, you’ll see that there will be a Linux machine icon that appears in the canvas area of the Armitage window. The scan has detected that the machine is running Linux, and Armitage has further determined a whole range of attacks that the machine may be vulnerable to.

image10

Let’s try to launch a Samba "username map script" Command Execution attack on the machine. According to Metasploit’s exploit database, ‘This module exploits a command execution vulnerability in Samba versions 3.0.20 through 3.0.25rc3 when using the non-default "username map script" configuration option. By specifying a username containing shell meta characters, attackers can execute arbitrary commands. No authentication is needed to exploit this vulnerability since this option is used to map usernames prior to authentication!’

image14

The default options will work just fine.

image12

After the attack has been launched, you will know that it is successful when you see that the original icon has changed.

image08

Congratulations, you have exploited your very first Linux box. Right-clicking the icon will reveal a whole range of new interactions that you can now have with the Metasploitable VM - without ever having to enter a username or password at all! Select the ‘Interact’ option as shown in the below screenshot. This brings up a console, which allows you to execute arbitrary code.

image03

You can do all sorts of things, like echoing a friendly statement to /tmp/pwn on the box.

image00

You can verify your action by switching to the Metasploitable VM console and checking to see if the changes you made are indeed reflected there.

image07

Of course, this just scratches the surface of what you can do with Armitage, and the 600+ other penetration testing tools on Kali. Spend time exploring the tools and understanding what they do under the surface. It will be worth it.

Section III: Exploiting Webgoat

We will work on exploring Webgoat’s extensive range of web application vulnerability tutorials next. Enter the Webgoat console and execute the Webgoat jar file in the background to start the server. You do this by entering

$ nohup java -jar /opt/app/webgoat-container-7.0-SNAPSHOT-war-exec.jar &

This command executes the Webgoat java server in the background, ignoring the HUP (hangup) signal, so the server will continue to run even if the shell is disconnected. The server will take a couple of minutes to initialize and start up.
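The server is ready once its embedded web container finishes starting. A quick way to check, assuming you launched the jar from your home directory and are using the default WebGoat context, is:

tail nohup.out
curl -I http://localhost:8080/WebGoat

Once you get an HTTP response back, you can move over to Kali and browse to the application.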

image05

Next, switch to the Kali desktop and navigate to the WebGoat URL. In my case, it is http://10.0.0.11:8080/WebGoat since my WebGoat VM has 10.0.0.11 as its IP address. Login with any of the credentials presented to you on the login screen, then navigate to the ‘Shopping Cart Concurrency Flaw’ exercise. This is one of the simplest and most elegant exploits of an ecommerce web application. I assure you that variants of this exploit exist in some websites out there.

image16

This exercise exploits the web application’s flawed shopping cart logic, which allows a user to purchase an expensive item for the price of a less expensive item. As you may have guessed from the title of the exercise, you will need two browser tabs open on this page for this to work. Then, follow this sequence of steps carefully.

  • In one tab, purchase a low-priced item by updating its ‘Quantity’ to 1, updating the cart, then selecting ‘Purchase’.
  • In the other tab, update the ‘Quantity’ of the highest-priced item to 1, then update the cart. Do not select ‘Purchase’.
  • Return to the first tab where you were buying the low-priced item and complete the purchase.
  • You have purchased the high-priced item but paid the low-price for it.

image11

Many of the exercises in WebGoat demonstrate real web application vulnerabilities that OWASP has identified as the most common in modern web applications. If you want a complete and hands-on education in web application security, there is no better place to begin.

Section IV: Fin

If you went through the above sections, you have successfully exploited a Linux machine and tricked a web application with just a few clicks. However, don’t be misled by the simplicity of the above exercises! Penetration testing and vulnerability assessments are often extremely complex, tedious, and sometimes discouraging. Playing with toy systems that are intentionally insecure will help you get familiar with tools and understand the reasons why insecure systems are insecure. It will help you to build applications with security in mind, and become more conscious of the dangers of careless software development.

When you have spent some time playing in the lab, I strongly encourage you to use the lab to build an environment that allows you to perform vulnerability assessments on your own systems. Ravello’s flexibility allows you to create a close replica of system and network infrastructures within a sandbox that can be repeatedly spun up and destroyed with a few clicks.

Lastly, keep in mind that breaking into computer systems is illegal. Most system administrators, government agencies, and companies don’t have a great sense of humor, and you don’t have to do any real damage to get into a considerable amount of trouble. Just trying to break into a system is a serious offence in many jurisdictions.

The post How to setup and run a penetration testing (pentest) lab on AWS or Google Cloud with Kali Linux, Metasploitable and WebGoat appeared first on The Ravello Blog.

Oracle Buys Ravello Systems

$
0
0

I am thrilled to share that Ravello Systems has entered into an agreement to be acquired by Oracle. The proposed transaction is subject to customary closing conditions. Upon closing of the transaction, our team will join the Oracle Public Cloud (OPC) organization and our products will become part of Oracle Cloud. We believe this agreement […]

The post Oracle Buys Ravello Systems appeared first on The Ravello Blog.


How to configure neutron networking in your OpenStack lab (Liberty)

$
0
0

liberty-logo

This blog describes how to work with OpenStack networking on Ravello Systems, which enables OpenStack lab environments in the cloud. When configuring networks in an OpenStack environment on Ravello, you are essentially setting up nested KVM and overlay networks. Ravello’s nested virtualization technology abstracts the underlying AWS and Google Cloud networking and presents a clean Layer 2 network. So, the OpenStack environment that’s running in Ravello utilizes this overlay network and configures its own overlay using the neutron networking service (did we say Inception!!!). I will use a pre-configured blueprint to describe how to set up OpenStack networking in Ravello. You can download this blueprint from Ravello Repo.

Start by launching the blueprint, then use ssh to connect to your instance (use the keypair provided by Ravello). Follow the instructions below to create neutron networks, public and private subnets, and use a neutron router to route traffic between subnets.

Tuning network interfaces in an OpenStack environment on Ravello Systems

In our configuration we used VXLAN as our tunneling protocol. There are some things to consider when nesting overlay networks. An overlay network is basically a network that is layered on top of an existing network. When using VXLAN, headers are added to all packets sent by your instances. This information is stripped off by the time the packet reaches its destination. Since the network is software defined, it allows flexibility and rapid deployment of custom network configurations. However, the added overhead can cause fragmentation, which can manifest as poor network performance. Due to the additional information added to packets, the overall size will be bigger than the default maximum transmission unit (MTU). To compensate, we can make the MTU at the instance level smaller. On the neutron server, adjust the default interface MTU setting to 1300 instead of 1500. This will significantly reduce fragmentation, resulting in better performance.

Execute the following command to set the instance MTU to 1300

echo "dhcp-option-force=26,1300" >/etc/neutron/dnsmasq-neutron.conf

Note: The type of application driving network traffic must also be taken into consideration while tuning. What type of data are you working with (streaming large files / OLTP / small files)? For production workloads I suggest more specific tuning of the MTU. Modify and measure to see what setting works best.
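Once you have an instance booted on the tenant network, you can verify the effective MTU with a don’t-fragment ping from inside that instance (1272 bytes of ICMP payload plus 28 bytes of headers equals 1300; the target below is a placeholder - use your subnet gateway or another instance):

ping -M do -s 1272 -c 3 10.10.0.1

If the pings succeed without “message too long” errors, the smaller MTU is doing its job.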

Physical Layer (Overview)

Public Network 192.168.0.0/16
Tenant Network 10.10.0.0/16

image03

Working with Neutron

Start by changing directory to root's home directory.

cd /root

Next, restart all OpenStack services.

openstack-service restart

Now authenticate as keystone admin.

source keystonerc_admin

Note: Your prompt will change to show that you have sourced the keystone admin credentials

Delete all demo networks

Before we can start creating our network, we need to make sure that all demo networks are removed. We start by deleting the existing ports.

neutron port-list | awk -F"|" '{print $2}'| grep -v id | grep -v "^$" | while read PORT; do neutron port-delete $PORT; done

Next we clear the default router.

neutron router-gateway-clear router1

After the default router has been cleared, we then remove the private interface from the router.

neutron router-interface-delete router1 $(neutron subnet-list | awk '/private_subnet/ {print $2}')

Next we can delete the default router.

neutron router-delete router1

Now we delete the public network.

neutron net-delete public

Next we delete the private network.

neutron net-delete private

Before moving on to the next step, we should verify that all network components have been successfully deleted.

neutron net-list

Note: This command should return an empty list.

Create the public and private networks and their associated subnets

As we proceed with the network build-out, you can see how the neutron network comes together piece by piece.
We start by creating our public network.

neutron net-create public --shared --router:external

Below is an example of the object that will be created in the OpenStack UI as a result of the command above.

image05

Next, we create our public subnet.

neutron subnet-create public 192.168.0.0/24 --name public-subnet --disable-dhcp --allocation-pool start=192.168.0.100,end=192.168.0.253 --gateway 192.168.0.2

Below is an example of the object that will be created in the OpenStack UI as a result of the command above.

image04

Now we create our private network.

neutron net-create private --shared

Below is an example of the object that will be created in the OpenStack UI as a result of the command above.

image07

Next, we create our private subnet.

neutron subnet-create private 10.10.0.0/16 --name private-subnet --dns-nameserver 8.8.8.8

Below is an example of the object that will be created in the OpenStack UI as a result of the command above.

image06

Now we create a router.

neutron router-create router

Below is an example of the object that will be created in the OpenStack UI as a result of the command above.

image01

Next, we add the network interfaces to our private subnet.

neutron router-interface-add router private-subnet

Below is an example of the object that will be created in the OpenStack UI as a result of the command above.

image00

Now we attach the router’s gateway to the public network.

neutron router-gateway-set router public

Below is an example of the object that will be created in the OpenStack UI as a result of the command above.

image02

NOTE: After you set the router gateway, it disconnects you and you have to reconnect to the node.
Since you were disconnected, you need to re-authenticate as the keystone admin.

source keystonerc_admin

Lastly, we modify the default security-groups to allow SSH and ICMP.

neutron security-group-list | grep default | awk '{print $2}' | while read SEC ; do neutron security-group-rule-create --protocol icmp --direction ingress $SEC; neutron security-group-rule-create --protocol tcp --port-range-min 22 --port-range-max 22 $SEC; done
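Before wrapping up, it is worth doing a quick sanity check that everything we created is in place:

neutron net-list
neutron subnet-list
neutron router-port-list router

You should see the public and private networks, their subnets, and the router’s two ports (one on the private subnet and one acting as the gateway on the public network).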

In summary

If you would like to run through this procedure, you can launch the OpenStack Liberty Neutron Blueprint in the Ravello Repo and run through these configurations yourself. Enjoy!

The post How to configure neutron networking in your OpenStack lab (Liberty) appeared first on The Ravello Blog.

VMware NSX and Cisco Nexus 1000v Architecture Demystified

$
0
0

vmware-nsx

Author:
Matt Conran
Matt Conran is a Network Architect based out of Ireland and a prolific blogger at Network Insight. In his spare time he writes on topics ranging from SDN, OpenFlow, NFV, OpenStack, Cloud, Automation and Programming.

Network virtualization brings many benefits to the table – reduced provisioning time, easier/cheaper network management, agility in bringing-up of sophisticated deployments to name a few. A large number of network and data-center architects around the globe are evaluating VMware NSX and Cisco Nexus 1000v to enable network virtualization in their data-centers. This article (part 1 of 3 part series) walks through the architectural elements of VMware NSX & Cisco Nexus 1000v, and explains how Ravello (powered by nested virtualization and networking overlay) can be used as a platform to run and deploy each of the solutions with a couple of clicks for evaluation during the decision-making process. Part 2 compares capabilities supported by Cisco Nexus 1000v and VMware NSX, and Part 3 walks through steps to create a Cisco Nexus 1000v & VMware NSX deployment on Ravello.

The Death of the Monolithic Application

The meaning of the application has changed considerably over the last 10 years. Initially, we started with a monolithic application stack where we had one application installed per server. The design proved to be very inefficient and a waste of server resources. When would a single application ever consume all server resources? Almost never, unless there was a compromise or some kind of bug. Single-server application deployment also has considerable vendor lock-in, making it difficult to move the application from one server vendor to another.

The application has now changed to a multi-tiered stack and is no longer installed on a single server. The application stack may have many dispersed tiers requiring network services such as firewalling, load balancing and routing between each tier. Physical firewall devices can be used to provide these firewalling services, and they have evolved to provide multi-tenancy through features such as VRFs and multiple contexts. But it is very hard to move firewalls in the event of an application stack move. In a disaster avoidance or recovery situation, App A might need to move to an entirely new location. If the security policies are tied to a physical device, how can its state and policies move? Some designs overcome this with stretched VLANs across a DCI link and stretched firewall clusters, both of which should be designed with care. A technology was needed to tie the network and security services to the actual VM workload and have them move alongside the VM.

The Birth of Microservices

The era of application microservices is upon us. We now have different application components spread across the network, and all these components need to communicate with each other. Even though we moved from a single application per physical server to an application per VM on a hypervisor, it was still not agile enough. Microservice applications are now being installed in Linux containers, Docker being the most popular. Containers are more lightweight than a VM, spinning up in less than 300 milliseconds. Kubernetes is also getting popular, resulting in massively agile compute environments. So how can traditional networking keep up with this type of agility? Everything that can be virtualized is being virtualized behind an abstracted software layer. We started with compute and storage, and now the missing network piece is picking up pace.

Distributed Systems & Network Virtualization

Network virtualization was the missing piece of the puzzle. Now that the network can be virtualized and put into software, it meets the agility requirements of containers and complex application tiers. The entire world of distributed systems is upon us. Everything is getting pushed into software at the edge of the network. The complexity of the network is no longer in the physical core nodes; it is at the edges, in software. Today's network consists of two layers: an overlay layer and an underlay physical layer. The overlay is the complicated part and allows VM communications. It is entirely in software. The physical underlay is typically a leaf and spine design, solely focused on forwarding packets from one endpoint to another. There are many vendors offering open source and proprietary solutions. VMware NSX and Cisco Nexus 1000v are some of the popular choices.

VMware NSX

VMware NSX is a network and security virtualization solution that allows you to build overlay networks. The decoupling / virtualization of networking services from physical assets displays the real advantages of NSX. Network virtualization with NSX offers the same API driven, automated and flexible approach much along the lines of what compute virtualization has done for compute. It enables changing hardware without having to worry about your workload networking which is preserved thanks to being decoupled from the hardware. There are also great benefits from decoupling security policy from its assets, abstracting security policy. All these interesting abstractions are possible as we are on the hypervisor and can see into the VM.

NSX provides the overlay, not the underlay. The physical underlay should be a leaf and spine design, with each rack limited to one or two ToR switches. Many implement just two ToR switches; depending on port count density you might only need one. Each ToR has a connection (or two) to each spine, offering a highly available design. Layer 2 domains should be limited as much as possible so as to minimise the size of broadcast domains. Broadcast domains should be kept to small isolated islands so as to minimise the blast radius should a fault occur. As a general design rule, Layer 2 should be used for what it was designed for - communication between two hosts. Layer 3 routing protocols should be used on the underlay as much as possible. Layer 3 uses a TTL that is not present in Layer 2, and the TTL field is used to prevent loops.

API Driven Solution

The hypervisor, also referred to as the virtual machine monitor, is a program that enables multiple operating systems to share a single host. Hypervisors are a leap forward in fully utilizing server hardware, as a single operating system per host would never fully utilise all physical hardware resources. Soft switches run in the hypervisor hosts and implement Layer 2 networking over Layer 3, using the IP transport in the middle to exchange data. VMware's NSX allows you to implement virtual segments in the soft switches and, as discussed, uses MAC over IP. To support remote Layer 2 islands there is no need to stretch VLANs and connect broadcast and failure domains together.

VMware NSX supports complicated application stacks in cloud environments. It has many features including Layer 2 and Layer 3 segments, distributed VM NIC firewalls, distributed routing, load balancing, NAT, and Layer 2 and Layer 3 gateways to connect to the physical world. NSX uses a proper control plane to distribute the forwarding information to the soft switches. The NSX controller cluster configures the soft switches located in the hypervisor hosts. The controller cluster has a minimum of 3 nodes, with a maximum of 5, for redundancy.

To form the overlay (on top of the underlay) between tunnel endpoints, NSX uses VXLAN. VXLAN has now become the de facto standard for overlay creation. There are three replication modes available - multicast, unicast and hybrid. Hybrid mode uses multicast locally and does not rely on the transport network for multicast support. This offers huge benefits, as many operational teams would not like to implement multicast on core nodes. Multicast is complex. The core should be as simple as possible, concerned only with forwarding packets from A to B. MPLS networks operate this way and they scale to support millions of routes.

VMware NSX operates with distributed routers. It looks as though all switches are part of the same router, meaning all switches share the same gateway IP and all listen for MAC addresses associated with that IP. The distributed approach creates one large logical device. All switches receive packets sent to the gateway and do Layer 3 forwarding.

One of the most powerful features of NSX is the VM NIC firewall. The firewalls are in-kernel and no traffic goes into user space. One drawback of the physical world is that physical firewalls are a network choke point, and they cannot be moved easily. Networks today need to be agile and flexible, and distributed firewalls fit that requirement. They are fully stateful and support IPv4 and IPv6.

Nexus 1000v Series

The Nexus 1000v Series is a software-based NX-OS switch that adds capabilities to vSphere 6 (and below) environments. The Nexus 1000v may be combined with other Cisco products, such as the VSG and vASA, to offer a complete network and security solution. As many organisations move to the cloud, they need intelligent and advanced network functions with a CLI that they know.

The Nexus 1000v architecture is divided into two main components - a) the Virtual Ethernet Module (VEM) and b) the Virtual Supervisor Module (VSM). These components are logically positioned differently in the network. The VEM sits inside the hypervisor and executes as part of the ESXi kernel. Each VEM learns individually and in turn builds and maintains its own MAC address table. The VSM is used to manage the VEMs. The VSM can be deployed in a highly available design (2 for redundancy), and control communication between the VEM and the VSM can now be Layer 3. When the communication was Layer 2, it required a packet and control VLAN configuration.

The Nexus 1000v can be viewed as a distributed device. The VSM controls multiple VEMs as one logical device. The VEMs do not need to be configured independently; all the configuration is performed on the VSM and automatically pushed down to the VEMs that sit in the ESXi kernel.
The entire solution is integrated into VMware vCenter. This offers a single point of configuration for the Nexus switches and all the VMware elements. The entire virtualization configuration is performed with the vSphere client software, including the network configuration of the Nexus 1000v switches.

image05

One major configuration feature of the Nexus 1000v is the use of port profiles. Port profiles are configured from the VSM and define the different network policies for VMs. They are used to configure interface settings on the VEM. When there is a change to a port profile setting, the change is automatically propagated to the interfaces that belong to that port profile. These interfaces may be spread across a number of VEMs dispersed around the network, so there is no need to configure on an individual NIC basis. In vCenter a port profile is represented as a port group, and port groups are applied to individual VM NICs through the vCenter GUI.

Port Profiles are dynamic in nature and move when the VM is moved. All policies defined with port profiles follow the VM throughout the network. In addition to moving policies the VM also retains network state.
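To make this concrete, a minimal vEthernet port profile configured on the VSM looks something like the following (the profile name and VLAN are illustrative only):

port-profile type vethernet Web-VMs
  vmware port-group
  switchport mode access
  switchport access vlan 10
  no shutdown
  state enabled

Once 'state enabled' and 'vmware port-group' are set, the profile appears in vCenter as a port group that can be assigned to VM NICs.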

Conclusion

The terms network virtualization and decoupling go hand in hand. The ability to decouple all services from physical assets is key to a flexible and automated approach to networking. VMware NSX offers an API driven platform for all network and security services, while existing vSphere & Cisco Nexus 1000v deployments are CLI and orchestration driven. The advantages and disadvantages of both should be weighed up not just from a feature parity perspective but also from a deployment model perspective, with the NSX platform being a big bang approach.

If you are in the process of deciding between these two solutions, and want to actually try them out – Ravello Networking Labs provides an excellent platform to test VMware NSX and Cisco Nexus 1000v deployments with a couple of clicks. One can use an existing NSX or Cisco 1000v blueprint as a starting point to tailor to their requirement, or create one from scratch. Just open a Ravello trial account, and contact Ravello to get your ‘feet wet’ with an existing deployment topology that you can run on your own.

The post VMware NSX and Cisco Nexus 1000v Architecture Demystified appeared first on The Ravello Blog.

Choosing between VMware NSX and Cisco Nexus 1000v

$
0
0

vmware-nsx

Author:
Matt Conran
Matt Conran is a Network Architect based out of Ireland and a prolific blogger at Network Insight. In his spare time he writes on topics ranging from SDN, OpenFlow, NFV, OpenStack, Cloud, Automation and Programming.

With SDDC (Software Defined Data Center) gaining prominence, network architects, administrators and data-center experts in enterprises around the globe find themselves staring at the inevitable question – should I go for a vSphere environment with Cisco Nexus 1000v or VMware’s NSX as the network virtualization solution that facilitates my SDDC? This article (part 2 of a 3-part series) compares Cisco Nexus 1000v with VMware NSX from a deployment model, components, multi-data-center support and network services perspective. Part 1 walks through the architectural components of Cisco Nexus 1000v and VMware NSX, and Part 3 walks through how to set up a fully functional environment of each on Ravello Networking Smart Labs (powered by nested virtualization and networking overlay).

VMware NSX for vSphere is built on top of vSphere Distributed Switch and cannot be run on top of Cisco Nexus 1000v. If you have vSphere environment already operating with Cisco Nexus 1000v and you are considering a jump to the API driven NSX world, this article will also help you understand the benefits and disadvantages of making that jump.

Deployment Model

VMware NSX is an entire suite of network and security services - in-kernel distributed firewalling / routing, load balancing, gateway nodes and redundant clusters - and all components can be managed from a GUI platform. Cisco’s Nexus 1000v is an add-on module for existing vSphere environments that may be integrated with other Cisco products such as the VSG and vASA.

From a platform-to-platform comparison, NSX and Cisco ACI are more comparable as they each represent a full, tightly integrated service suite. But if you have a Nexus 1000v deployed in your existing VMware environment, what benefits do you gain from upgrading to the entire NSX suite?

The NSX platform is not a gentle upgrade or something you can deploy in pockets / islands of the network. Planning is essential for an NSX upgrade. NSX operates as an overlay model, so it really is a big bang approach and requires entire team collaboration. While the Cisco Nexus 1000v and its supporting products are add-on modules and can be gently introduced, NSX favours a greenfield deployment model. The old and new networks could be linked, and applications with their corresponding services could be migrated over time. Greenfields are less risky, but parallel networks come at a cost.

Component Introduction

NSX operates on the VDS, and the feature set between the Nexus 1000v and the VDS is more or less the same. Depending on the release dates, one may outperform the other for a period of time, but there is not too much of a difference. Setting aside the additional integrated components NSX offers (controller clusters, edge services gateways, cross-vCenter), it has two great new feature sets - the distributed in-kernel firewall and distributed in-kernel forwarding.

NSX has Edge router functionality used for various services - VPN & firewall, load balancing, and dynamic routing (BGP and OSPF). The distributed logical router sits in the control plane and communicates with the controller, which in turn communicates with the NSX manager. The NSX Edge services gateway sits in the data plane.

The Nexus 1000v has two editions - the standard (Essential) edition and the enhanced (Advanced) edition. The standard edition is free to download with a CCO account, and the enhanced edition requires a purchased licence. The enhanced edition supports additional features such as the Cisco Integrated Security Features (ISF): DHCP snooping, IP source guard, and Dynamic ARP Inspection, as well as TrustSec and the VSG. Both versions share quite a few features and both can be integrated with additional Cisco products.

The following table displays the feature parity between the two editions (from cisco.com):

Features | Cisco Nexus 1000V Essential Edition (no cost) | Cisco Nexus 1000V Advanced Edition (with Cisco VSG)
Layer 2 switching features: VLANs, private VLANs, loop prevention, multicast, virtual PortChannel (vPC), Link Aggregation Control Protocol (LACP), access control lists (ACLs), etc. | Included | Included
Network management features and interfaces: Cisco Switched Port Analyzer (SPAN), Encapsulated Remote SPAN (ERSPAN), and NetFlow Version 9; VMware vTracker and vCenter Server plug-in; SNMP; RADIUS; etc. | Included | Included
Advanced features: ACLs, quality of service (QoS), and VXLAN | Included | Included
Cisco vPath (for virtual service insertion) | Included | Included
Cisco Integrated Security Features (ISF): DHCP snooping, IP source guard, and Dynamic ARP Inspection | Not supported | Included
Cisco TrustSec SGA support | Not supported | Included
Cisco VSG | Supported | Included
Other virtual services: Cisco ASA 1000V, vWAAS, etc. | Available separately | Available separately

The main reasons for upgrading from a vSphere–Cisco Nexus 1000v environment are architectural and operational benefits. From an operational perspective it may be simpler to have everything under one hood with NSX.

VMware NSX clearly has many more components and network services than the Nexus 1000v. But if your business and application requirements are met with existing infrastructure based on the Cisco 1000v (with potentially other virtual services), you may choose to avoid the big-bang upgrade to NSX.

Multi-Data Center

NSX is a complete network and security solution that operates on the VDS. With the release of software version 6.2, NSX supports vSphere 6.0 Cross-vCenter NSX. Previously, logical switches, routers, and distributed firewalls had a single-vCenter deployment model, but with 6.2 these services can be deployed across multiple vCenters. This enables logical network and security services for workloads to span multiple vCenters, and even physical locations. A potential use case is combining multiple physical data centres that have different vCenters. This new design choice by VMware promotes the NSX Everywhere product offering.

NSX enables an application and its corresponding network / security services to span multiple data centers. All your resources are pooled together, and the location of each is abstracted into a software abstraction layer. This offers a new disaster avoidance and disaster recovery model. For traffic steering, previous active-active data center designs might need additional kludges such as LISP, /32 host routing or HSRP localisation. Without proper configuration of these kludges, all east-west traffic could trombone across the DCI link. They all add to network complexity and only really deal with egress traffic. Ingress traffic still needs proper application architecture and DNS load balancing.

NSX is a proper network virtualization platform and you don't need to configure extra kludges for a multi data center design. It has a local egress optimization feature, so traffic exits at the correct data centre point and does not need to flow over the delicate DCI link.

Unlike Cisco ACI (which is comparable to VMware NSX), the Cisco Nexus 1000v is not a complete solution for multi data centre support, but it does have capabilities to link data centres together. Similar to VMware NSX, the Nexus 1000v supports VXLAN - a MAC over IP technology. VXLAN is used to connect Layer 2 islands over a Layer 3 core, so if you have applications that require Layer 2 adjacency in different data centers, could you use the Nexus 1000v as the DCI mechanism? Technically it's possible. The problem with VXLAN in the past has been its control plane, and the initial releases of VXLAN required a multicast enabled core. The latest releases of Nexus 1000v do offer enhancements including Multicast-less mode, Unicast Flood-less mode, VXLAN Trunk Mapping and Multiple MAC Mode. The new modes increase the robustness of VXLAN. However, VXLAN was developed to be used in the cloud to support multi-tenancy and this is how it will probably continue to be developed in further releases. By itself, the Nexus 1000v doesn't offer great DCI features and capabilities. It may, however, be used in conjunction with other DCI technologies to become a more reliable DCI design.

Network Services

A major selling point for NSX is its ability to support VM-NIC firewalls. VMware has a built-in distributed firewall feature allowing stateful filtering services to be applied at the VM NIC level. This gives you an optimum way to protect east-west traffic along with a central configuration point. Individual policies do not need to be configured on an individual NIC basis. All the configuration can be done in a GUI and propagated down to the individual VM NICs. The entire solution scales horizontally: as you add more compute hosts you get more VM NIC firewalls. Micro firewalls do not result in traffic tromboning or hairpinning, offering optimum any-to-any traffic.

By default, the Nexus 1000v does not offer a distributed firewall model, but it can be integrated with the VSG and the vASA. These additional modules are supported in both the standard and enhanced editions, and both can be managed by the Cisco Virtual Network Management Center. The VSG is a multi-tenant security firewall that implements policies that move with mobile virtualized workloads. It decouples the control and data plane operations and connects to the Nexus 1000v VEM using vPath technology, which is used to steer the traffic. It employs a scalable model: only the initial packet is sent to the VSG, and subsequent packets are offloaded to vPath on the VEM.

VMware NSX allows you to decouple networking from the physical assets by leveraging the hypervisor edge - the new access switch. Decoupling the network functions from hardware also makes it possible to virtualise those network functions.

The main driver for NSX is that its network virtualization approach is API driven. Network virtualization provides the abstraction from the physical assets, and all of this is API driven. Yes, you can automate using some sort of CLI wrapper, but that approach just doesn't scale. Most CLI wrapping approaches fail as soon as it comes to looking at the entire lifecycle of a component, not only its creation. It is also possible to automate creation of an asset by writing different CLI scripts for certain actions. But what about advanced features – such as querying status and capacity, free resources, or removing assets? This would bring a lot of operational complexity into what you need to do in a script. An API solution is far superior to, and easier to manage than, a CLI switch hidden behind an orchestrator.

Capability | VMware NSX | Nexus 1000v & vSphere
Multi data center | Built in, with local egress support, and promoted with NSX Everywhere | Not a true DCI product but technically capable with additional technologies
Service chaining | Built in service chaining | Service chaining with vPath
Distributed firewall | Built in distributed VM-NIC firewall | Add-on modules
Edge services gateway | Built in | Potential edge services with add-on modules
Virtual private networks | SSL, IPsec, L2 VPN | Potentially with add-on modules
End-to-end activity monitoring | Traceflow | N/A, but has NetFlow, SPAN, and Encapsulated Remote SPAN
Services - DHCP & DNS | Yes | Yes
Abstracted security | Yes | No
API driven | Yes, full API solution | Orchestrated

Conclusion

So should you switch to VMware NSX from Cisco Nexus 1000v for your SDDC? The answer is - it depends. If your existing business and technical requirements are already being met and you don’t want to take a big-bang approach to change everything – Cisco Nexus 1000v is the way to go. If you are looking for a greenfield approach to build your SDDC with strong out-of-box integration with existing VMware resources – NSX will help you get there quicker.

Interested in trying out both Cisco and VMware solutions to get a feel for which one is right for you? Just open a Ravello trial account, and reach out to Ravello. They can help you run the Cisco Nexus 1000v and VMware NSX solutions showcased here with one click.

The post Choosing between VMware NSX and Cisco Nexus 1000v appeared first on The Ravello Blog.

How to run VMware NSX and Cisco Nexus 1000v on AWS & Google Cloud

$
0
0

vmware-nsx

Author:
Matt Conran
Matt Conran is a Network Architect based out of Ireland and a prolific blogger at Network Insight. In his spare time he writes on topics ranging from SDN, OpenFlow, NFV, OpenStack, Cloud, Automation and Programming.

Network and data-center architects are evaluating network virtualization solutions to bring workload agility to their data-centers. This article (part 3 of a 3 part series) details how to set up fully-functional VMware NSX and Cisco Nexus 1000v deployments on Ravello to evaluate each of the solutions. Part 1 compares the architectural components of Cisco Nexus 1000v and VMware NSX, and Part 2 looks into the capabilities supported by each.

Setting up Nexus 1000v and vSphere 6.0 on public cloud

In this section we will walk through setting up a VMware vSphere 6.0 environment with the addition of Cisco's Nexus 1000v on AWS & Google Cloud using Ravello, and save a 'blueprint template' of the setup for one-click deployment. VMware vSphere is a virtualization platform that by default comes with a standard and a distributed virtual switch (DVS). The Nexus 1000v is a Cisco product integrated into vCenter for additional functionality. Similar to the VMware VDS, it follows a distributed architecture and is Cisco's implementation of the distributed virtual switch (the distributed virtual switch being a generic term among all vendors). The Nexus 1000v is a distributed platform that uses a VEM module for the data plane and a VSM module for the control plane. The VEM operates inside the VMware ESXi hypervisors.

image02

The setup consists of a number of elements. The vCenter server is running on a Windows 2012 server (trial edition), not as an appliance, and acts as the administration point for the virtualized domain. Two ESXi hosts are installed with test Linux VMs. Later, we will install the Nexus 1000v modules, both VEM and VSM, on one of the ESXi hosts. The vSphere client version 6 is also installed on the Windows 2012 server. The ESXi hosts have default configurations including the standard vSwitch and port groups.

The architecture below is built into a working blueprint, enabling you to go on and build a variety of topologies and services. The Nexus 1000v standard edition is installed on a vSphere environment with an Enterprise Plus license, which is a requirement for Nexus 1000v deployment. We currently have two ESXi hosts and one vCenter. A flat network of 10.0.0.0/16 is used and we have IP connectivity between all hosts.

We install the Nexus version (Nexus1000v.5.2.1.SV3.1.5a-pkg.zip), which is compatible with vSphere 6.0. It can be downloaded from the Cisco website for free with your Cisco CCO account. There are two versions of the Nexus 1000v available - standard and advanced. The advanced edition has additional features requiring a license; the standard edition has a slightly reduced feature set but is free to download. This blueprint uses the standard edition.

Nexus 1000v Installation

Once downloaded, you can deploy the OVA within vCenter. There are a number of steps you have to go through, such as setting the VSM domain ID and management IP address.

image03

Once finished, you should be able to see the N1K deployed as a VM in your inventory. Power it on and SSH to the management IP address. The N1K has the concept of control and packet VLANs, and it is possible to use VLAN 1 for both. For production environments, it is recommended to separate these. This blueprint uses Layer 3 mode, so we don't need to do this.
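For reference, the Layer 3 control mode ends up as an svs-domain configuration on the VSM along these lines (the domain ID is a placeholder and is normally chosen during the OVA deployment wizard):

svs-domain
  domain id 100
  svs mode L3 interface mgmt0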

Next, we must register the Nexus 1000v with vCenter by downloading the Nexus 1000v extension and adding it to vCenter. Go to the web GUI of the VSM and download the extension XML (right-click the extension link and save it). Once complete, you can import the extension as a plug-in into vCenter.

Cisco Nexus 1000V

Now we are ready to log back into the VSM and configure it to connect to vCenter. Once this is done, in vCenter you will see the Distributed Switch created under Home > Inventory > Networking.
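The VSM-to-vCenter link itself is an SVS connection configured on the VSM, roughly as follows (the connection name, vCenter IP and datacenter name are placeholders for your environment):

svs connection vcenter
  protocol vmware-vim
  remote ip address 10.0.0.5
  vmware dvs datacenter-name Datacenter
  connect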

Next, we install the VEM (Virtual Ethernet Module) on the ESXi host and connect the host to the N1K VSM. Once the VEM is installed, you can check its status and make sure the host is connected to the VSM and visible in vCenter.
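A minimal sketch of the VEM installation and check from the ESXi shell (the bundle path is a placeholder; use the VEM package that matches your ESXi build):

esxcli software vib install -d /tmp/VEM-bundle.zip
vem status -v

Back on the VSM, 'show module' should list the host as a new module once it connects.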

The following screen shows the VSM connected to vCenter.

image11

The following screen shows the VEM correctly installed. This step needs to be carried out on all ESXi hosts that require the VEM module. Once installed, the VEM gets its configuration from the VSM.

image05
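
To confirm the VEM package is present on a host without opening an SSH session, here is a hedged PowerCLI sketch; the host name is a placeholder and the VIB name pattern is an assumption about how the Cisco VEM package is named.

# Assumption: a PowerCLI session is already connected; 'esxi-01' is a placeholder host name.
$esxcli = Get-EsxCli -VMHost (Get-VMHost -Name 'esxi-01') -V2

# List installed VIBs and filter for the Cisco VEM package (name pattern is an assumption).
$esxcli.software.vib.list.Invoke() | Where-Object { $_.Name -like '*cisco-vem*' } |
    Select-Object Name, Version, InstallDate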

Now you are ready to build out a topology by adding hosts to your Nexus 1000v - for example, install the VEM on the other ESXi host and add an additional VSM for high availability. With Ravello this is easy to do: simply save the ESXi host to the library and add it into the setup, remembering to change the DNS and IP settings on the new host. Once this deployment is created, you can click “Save as Blueprint” to have the entire topology - complete with VMs, configuration, networking and storage interconnects - saved into your Blueprint library, which can be used to run multiple clones of this deployment with one click.

image09

Setting up VMware NSX on public cloud

VMware NSX is a network and security virtualization platform. Network virtualization involves decoupling the control and data planes and offering an API to configure network services from a central point. NSX abstracts the underlying physical network and introduces a software overlay model that rides on top of it. This decoupling permits complex network services to be deployed in seconds.

The diagram below displays the NSX blueprint created on Ravello. Its design is based on separation into clusters, to keep the management and data planes apart.

image00

The following is a summary of the prerequisites for an NSX deployment (a quick PowerCLI sanity check follows this list):

  • The standard vSphere Client cannot be used to manage NSX; the vSphere Web Client is used instead.
  • A vCenter Server (version 5.5 or later) with at least two clusters. Multi-vCenter deployments require vCenter version 6.0.
  • NTP and DNS must be configured.
  • Distributed virtual switches must be deployed instead of standard virtual switches. The VDS forms the foundation of the overlay VXLAN segments.
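
As a quick sanity check of the vCenter version and host NTP settings before installing NSX, here is a minimal PowerCLI sketch; it assumes an existing PowerCLI session (see the earlier Connect-VIServer example) and makes no NSX-specific calls.

# vCenter must be 5.5 or later (6.0 for multi-vCenter NSX deployments).
$global:DefaultVIServer | Select-Object Name, Version

# Each ESXi host should have NTP configured and the ntpd service running.
Get-VMHost | Select-Object Name,
    @{N='NtpServers'; E={ ($_ | Get-VMHostNtpServer) -join ', ' }},
    @{N='NtpRunning'; E={ ($_ | Get-VMHostService | Where-Object Key -eq 'ntpd').Running }}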

The following ports are required (a quick reachability check follows the list):

  • TCP ports 80 and 443 for vSphere communication and the NSX REST API.
  • TCP ports 1234, 5671 and 22 for host-to-controller-cluster communication, the RabbitMQ message bus and SSH access.
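
Here is a minimal PowerShell sketch to confirm these ports are reachable from a management host; the target name is a placeholder, and in a real deployment the controller ports (1234, 5671) and manager ports (80, 443, 22) belong to different components.

# Assumption: 'nsxmgr.lab.local' is a placeholder target; adjust per the component being checked.
$ports = 80, 443, 22, 1234, 5671
foreach ($port in $ports) {
    $result = Test-NetConnection -ComputerName 'nsxmgr.lab.local' -Port $port -WarningAction SilentlyContinue
    '{0,5} : {1}' -f $port, $result.TcpTestSucceeded
}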

The NSX Manager and its components require a considerable amount of resources. Pre-install checks should verify the CPU, memory and disk space required by the NSX Manager, NSX Controllers and NSX Edge appliances.

The NSX deployment consists of a number of elements. The two core components are the NSX Manager and the NSX Controller. The NSX Manager is an appliance that can be downloaded in OVA format from VMware's website. The recommended approach is to deploy the NSX Manager on a separate management cluster, apart from the compute clusters. This separation decouples the management, data and control planes.
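
If you want to script the appliance deployment rather than use the Web Client, one option is PowerCLI's Import-VApp. The sketch below is an outline only - the file path, host and datastore names are placeholders, and the OVA's deployment properties (passwords, network settings) still need to be filled in before it will deploy cleanly.

# Assumption: paths and inventory object names below are placeholders.
$ovaPath = 'C:\ISO\VMware-NSX-Manager.ova'
$vmHost  = Get-VMHost -Name 'esxi-mgmt-01'
$ds      = Get-Datastore -Name 'mgmt-datastore'

# Inspect the OVA's configurable properties (CLI password, network settings, etc.).
$ovfConfig = Get-OvfConfiguration -Ovf $ovaPath
$ovfConfig.ToHashTable()

# Deploy the appliance once the $ovfConfig values have been populated.
Import-VApp -Source $ovaPath -OvfConfiguration $ovfConfig -Name 'NSX-Manager' `
    -VMHost $vmHost -Datastore $ds -DiskStorageFormat Thin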

All configuration is carried out in the “Networking & Security” tab. The diagram below displays the logical switches and the transport zone they belong to. ESXi hosts that can communicate with each other are said to be in the same transport zone. Transport zones control the span of logical switches, which enables a logical switch to extend across distributed switches. Therefore, any ESXi host that is a member of that transport zone may have multiple VMs attached to that network.

image10

The management cluster below runs the vCenter Server, and the NSX Controllers are deployed in the compute clusters. Each NSX Manager should be connected to only one vCenter. The NSX Manager exposes a summary tab both in its own GUI and in the Web Client, and you can also SSH to its IP address.

The diagram below shows the version of the NSX Manager along with ARP and ping tests.

image07

The next component is the NSX control plane, which consists of controller nodes. There should be a minimum of three controller virtual machines; three controllers are used for high availability, and all of them are active at any given time.

The deployment of controllers is done via the Networking & Security | Installation and Management tab. From here, click on the + symbol to add a controller.

image06

The data plane components must be installed on a per-cluster basis. This involves preparing the ESXi hosts for data plane activity, which also enables the distributed firewall service on all hosts in the cluster. Any new hypervisor installed and added to the cluster will be provisioned automatically.

image01

Just as with the Cisco Nexus 1000v, once this NSX deployment is created, you can click “Save as Blueprint” to have the entire topology - complete with VMs, configuration, networking and storage interconnects - saved into your Blueprint library, which can be used to run multiple clones of this deployment with one click.

The current NSX blueprint is already fairly large, with multiple clusters, but it can also be easily expanded, similarly to the vSphere 6.0 and Cisco Nexus 1000v blueprint. Nodes can be added by saving the item to the library and inserting it into the blueprint. There are many features to test with this blueprint, including logical switching, firewalling, routing on Edge Services Gateways, SSL and IPsec VPN, data security and flow monitoring. With additional licences you can expand this blueprint to use third-party appliances, such as those from Palo Alto.

The vSphere 6.0 and Cisco Nexus 1000v deployment can also be easily expanded to a much larger scale. Additional ESXi hosts can be added by saving the item to the library and inserting it into the blueprint. With this type of flexibility we can easily scale the blueprint and design with multiple VEMs and VSMs; a fully distributed design will have multiple VEMs installed. With additional licences you can insert other Cisco appliances that work with the Nexus 1000v, such as the VSG or the vASA, allowing you to test service chaining and other advanced security features.

If you are interested in trying out this blueprint, or in creating your own VMware NSX or Cisco Nexus 1000v deployment from scratch, just open a Ravello trial account and send us a note. You will be on your way to playing with a fully functional VMware NSX or Cisco Nexus 1000v deployment within minutes, or you can build your very own deployment using Ravello’s Networking Smart Labs.

The post How to run VMware NSX and Cisco Nexus 1000v on AWS & Google Cloud appeared first on The Ravello Blog.

How to Configure Windows 2016 Containers on AWS or Google Cloud using Ravello


windows-server-2016

This blog shows how to install and create Windows containers on AWS or Google using Ravello, with an example. Ravello's nested virtualization technology allows you to deploy existing data center workloads on leading public clouds. Our earlier blogs show you how to install Windows XP, Windows 7 or Windows 8 on AWS or Google using an ISO. This blog post guides you step by step, explaining how to install Windows 2016 on AWS/Google, how to optimize performance using the correct device drivers, how to install the Windows Containers feature, and how to configure networking and create Windows containers.

A container is an isolated place where an application can run without affecting the rest of the system and without the system affecting the application. Because a container has everything it needs to run your application, containers are very portable and can run on any machine that is running Windows Server 2016.

Install Windows 2016 on a Virtual Machine in Ravello

Log into your Ravello account, create a new application, find the provided empty VM in our Ravello library and drag and drop it onto the canvas.

image1

Upload the Windows 2016 installation ISO file into Ravello. Currently the latest Windows 2016 Technical Preview 4 (TP4) ISO can be downloaded here.

This blog is based on Windows Server 2016 Essentials Technical Preview 4 (TP4).

Click on “Import VM” and follow the prompts to upload your ISO into Ravello Library. Here’s a quick guide to upload your ISO.

Assuming you have uploaded the ISO into the Ravello library, you can go ahead and configure the VM to boot from a CD-ROM attached to the ISO file. Click on the Disk properties of the VM, go to CD-ROM, browse through the library and choose the relevant Windows 2016 ISO. Make sure that the ‘skip CD boot’ checkbox is unchecked when you boot the VM for the first time. After the OS is installed, you can check ‘skip CD boot’ to ensure that it always boots from the disk.

image2

Note: For better I/O performance we recommend changing the Disk Controller type to LSI Logic SAS. Windows 2016 automatically detects the disk and installs the LSI Logic SAS driver.

Open the System properties of the VM and edit the number of virtual CPUs and the memory size. We recommend at least 4 CPUs and 8 GB RAM for a Windows 2016 server running Windows containers. If you are planning to run many containers on this VM, increase the memory size accordingly. In the General tab, set the name of the VM and the hostname.

image3

If you want to be able to RDP into the VM, go to the Services tab and add a supplied service for RDP, opening port 3389. Check the ‘External’ checkbox if you want to be able to RDP to the VM from outside the Ravello application.

Note: in order to RDP to the VM, besides adding a supplied service for the RDP/3389 connection, you'll also need to enable RDP inside the VM after the OS is installed.
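
One common way to enable RDP from an elevated PowerShell prompt inside the VM, once the OS is installed, is sketched below.

# Allow incoming Remote Desktop connections.
Set-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Control\Terminal Server' `
    -Name 'fDenyTSConnections' -Value 0

# Open the built-in Remote Desktop firewall rule group.
Enable-NetFirewallRule -DisplayGroup 'Remote Desktop'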

In this blog I am going to create an IIS container and redirect all HTTP traffic to it. Therefore, I need to add a supplied service for http traffic and open port 80.

Now let’s publish the application.

image4

Once the VM is published, we can open up the console and go through the Windows 2016 installation. When you get to the screen asking you to select an operating system, choose the option with the Desktop Experience.

image5

Selecting this option will install the GUI version of Windows 2016 Server. If you select the first option, a non-GUI Windows Server Core version will be installed and you'll need to configure everything from the command line or PowerShell.

On the next screen select ‘Custom: Install Windows only (advanced)’

image6

After the Windows OS is installed and you connect to the VM through the console, install VMware Tools in order to improve mouse behavior.

Install VMWare Tools on a Windows VM

You can download the latest version of VMWare tools for Windows.

Upload the ISO file to the Ravello library using the Ravello Import Tool and, when the upload is complete, go back to your application and, in the Disk tab, attach the CD-ROM to the uploaded VMware Tools ISO file. Make sure that the ‘Skip CD boot’ option is unchecked.

image7

Save the changes and click the ‘Update’ button to apply them. Once the console button is activated, open the VM console, log in as Administrator, browse the CD-ROM and run setup64 to install VMware Tools on the server, using the Typical installation option. Once the installation of VMware Tools is completed, you are prompted to restart the server. After the server is rebooted and you connect again, you will see that the mouse behavior is significantly improved.

Now, to finalize and optimize the VM's performance, it is recommended to use a para-virtualized device for the network card. Highlight the VM in the Ravello canvas and, in the Network properties, change the device type from e1000 to VMXNet3.

image8

Save the changes and click the ‘Update’ button to apply them. The VM will restart again. From this point you can connect to the console and continue working there, or enable RDP inside the VM and RDP to it using its external DNS name. You can get the external DNS name of the VM from the Summary tab.

image9

The next step is enabling the Containers feature on the server and starting to work with Windows containers.

Setup/Installation of Windows Containers

Launch the Add Roles and Features wizard from Windows Server Manager

image10
 
Continue through the wizard until you get to the Features section
 
Find Containers and select it
 
image11
 
Continue to click Next, and complete the installation of this feature
 
Restart the server after the feature is installed
 
Check that the feature has been installed by running the PowerShell command Get-Command -Module Containers to see all of the commands available
 
image12
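
If you prefer to skip the wizard, the same feature can be added from an elevated PowerShell prompt; a minimal equivalent sketch:

# Install the Containers feature and restart (equivalent to the wizard steps above).
Install-WindowsFeature -Name Containers -Restart

# After the reboot, confirm the feature is installed and list the container cmdlets.
Get-WindowsFeature -Name Containers
Get-Command -Module Containers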

Installing Base OS Images

Container OS images can be found and installed using the ContainerProvider PowerShell module. Before using this module, it needs to be installed. Open PowerShell as Administrator and run the following command to install the module.

PS C:\> Install-PackageProvider ContainerProvider -Force

Return a list of images from PowerShell OneGet package manager:

PS C:\> Find-ContainerImage

Name               Version       Description
NanoServer         10.0.10586.0  Container OS Image of Windows Server 2016 Techn...
WindowsServerCore  10.0.10586.0  Container OS Image of Windows Server 2016 Techn...

To download and install the Windows Server Core OS image, run the following. The -Version parameter is optional; without a base OS image version specified, the latest version will be installed.

PS C:\> Install-ContainerImage -Name WindowsServerCore -Version 10.0.10586.0

Downloaded in 0 hours, 2 minutes, 28 seconds.

Verify that the images have been installed using the Get-ContainerImage command.

PS C:\> Get-ContainerImage

Name               Publisher     Version       IsOSImage
WindowsServerCore  CN=Microsoft  10.0.10586.0  True

Prior to creating a new container, we need to configure networking. Windows containers function similarly to virtual machines with regard to networking: each container has a virtual network adapter, which is connected to a virtual switch, over which inbound and outbound traffic is forwarded. Two types of network configuration are available.

  • Network Address Translation (NAT) mode – each container is connected to an internal virtual switch and receives an internal IP address. A NAT configuration translates this internal address to the external address of the container host.
  • Transparent mode – each container is connected to an external virtual switch and receives an IP address from a DHCP server.

In this article I configure containers to use NAT mode. The container host has an 'external' IP address which is reachable on the network, while all containers are assigned 'internal' addresses that cannot be reached from the network. To make a container accessible in this configuration, an external port of the host is mapped to an internal port of the container. For more information on how to configure container networking in NAT and transparent modes, please read this link.

To create a new NAT enabled virtual switch with internal subnet 172.16.1.0/24 run the following command.

PS C:\> New-VMSwitch -Name "Virtual Switch" -SwitchType NAT -NATSubnetAddress "172.16.1.0/24"
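
Optionally, you can verify that the switch and NAT object were created, using the standard Hyper-V and NetNat cmdlets (the NAT object name on your host may differ):

PS C:\> Get-VMSwitch -Name "Virtual Switch" | Select-Object Name, SwitchType
PS C:\> Get-NetNat | Select-Object Name, InternalIPInterfaceAddressPrefix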

Now we can create a Windows Server container using the New-Container command. The example below creates a container named WindowsServerCoreDemo from the WindowsServerCore OS image and connects the container to the VM switch named Virtual Switch.

PS C:\> New-Container -Name WindowsServerCoreDemo -ContainerImageName WindowsServerCore -SwitchName "Virtual Switch"

Name                   State  Uptime    ParentImageName
WindowsServerCoreDemo  Off    00:00:00  WindowsServerCore

To list existing containers, use the Get-Container command.

PS C:\> Get-Container

Name                   State  Uptime    ParentImageName
WindowsServerCoreDemo  Off    00:00:00  WindowsServerCore

Start the container using the Start-Container command.

PS C:\> Start-Container -Name WindowsServerCoreDemo

Connect to the container using the Enter-PSSession command. Notice that when the PowerShell session has been created with the container, the PowerShell prompt changes to reflect the container name.

PS C:\> Enter-PSSession -ContainerName WindowsServerCoreDemo -RunAsAdministrator

[WindowsServerCoreDemo]: PS C:\Windows\system32>

Create IIS Image

Now the container can be modified, and these modifications captured to create a new container image. For this example, IIS is installed.

To install the IIS role in the container, use the Install-WindowsFeature command.

[WindowsServerCoreDemo]: PS C:\> Install-WindowsFeature web-server

Success  Restart Needed  Exit Code  Feature Result
True     No              Success    {Common HTTP Features, Default Document, D...

When the IIS installation has completed, exit the container by typing exit. This returns the PowerShell session to that of the container host.

[WindowsServerCoreDemo]: PS C:\> exit
PS C:\>

Finally, stop the container using the Stop-Container command.

PS C:\> Stop-Container -Name WindowsServerCoreDemo

The state of this container can now be captured into a new container image. This example creates a new container image named WindowsServerCoreIIS, with a publisher of Demo and a version of 1.0.

PS C:\> New-ContainerImage -ContainerName WindowsServerCoreDemo -Name WindowsServerCoreIIS -Publisher Demo -Version 1.0

Name                  Publisher  Version  IsOSImage
WindowsServerCoreIIS  CN=Demo    1.0.0.0  False

Now that the container has been captured into the new image, it is no longer needed. You may remove it using the Remove-Container command.

PS C:\> Remove-Container -Name WindowsServerCoreDemo -Force

Create IIS Container

Create a new container, this time from the WindowsServerCoreIIS container image.

PS C:\> New-Container -Name IIS -ContainerImageName WindowsServerCoreIIS -SwitchName "Virtual Switch"

Name  State  Uptime    ParentImageName
IIS   Off    00:00:00  WindowsServerCoreIIS

Start the container.

PS C:\> Start-Container -Name IIS

Configure Networking

The default network configuration for the Windows Container quick starts is to have containers connected to a virtual switch configured with Network Address Translation (NAT). Because of this, in order to connect to an application running inside a container, a port on the container host needs to be mapped to a port on the container. For detailed information on container networking see Container Networking.

For this exercise, a website is hosted in IIS running inside a container. To access the website on port 80, map port 80 of the container host's IP address to port 80 of the container's IP address.

Run the following to return the IP address of the container.

PS C:\> Invoke-Command -ContainerName IIS {ipconfig}

Windows IP Configuration

Ethernet adapter vEthernet (Virtual Switch-2F7EC342-CC9A-4369-BB3E-507256F363A2-0):

Connection-specific DNS Suffix . : localdomain
Link-local IPv6 Address . . . . . : fe80::a5f0:9aca:a728:a332%19
IPv4 Address. . . . . . . . . . . : 172.16.1.2
Subnet Mask . . . . . . . . . . . : 255.255.255.0
Default Gateway . . . . . . . . . : 172.16.1.1
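
If you would rather capture the container's IPv4 address into a variable than read it off the ipconfig output, here is a small sketch; the interface filter is an assumption based on the vEthernet adapter naming shown above.

# Grab the container's IPv4 address for use in the NAT mapping below.
$containerIP = Invoke-Command -ContainerName IIS {
    (Get-NetIPAddress -AddressFamily IPv4 |
        Where-Object { $_.InterfaceAlias -like 'vEthernet*' }).IPAddress
}
$containerIP

You can then pass $containerIP to the -InternalIPAddress parameter in the mapping command below.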

To create the NAT port mapping, use the Add-NetNatStaticMapping command. The following example checks for an existing port-mapping rule and, if one does not exist, creates it. Note that the -InternalIPAddress needs to match the IP address of the container.

if (!(Get-NetNatStaticMapping | where {$_.ExternalPort -eq 80})) {
    Add-NetNatStaticMapping -NatName "ContainerNat" -Protocol TCP -ExternalIPAddress 0.0.0.0 -InternalIPAddress 172.16.1.2 -InternalPort 80 -ExternalPort 80
}

When the port mapping has been created, you also need to configure an inbound firewall rule for the configured port. To do so for port 80, run the following script. Note: if you've created a NAT rule for an external port other than 80, the firewall rule needs to be created to match.

if (!(Get-NetFirewallRule | where {$_.Name -eq "TCP80"})) {
    New-NetFirewallRule -Name "TCP80" -DisplayName "HTTP on TCP/80" -Protocol tcp -LocalPort 80 -Action Allow -Enabled True
}

Now that a container has been created from the IIS image and networking has been configured, open a browser and browse to the IP address of the container host; you should see the IIS start page. For example, I connect to the external DNS name of the host running in my Ravello application, and the host automatically translates it to the internal IP address of the IIS container. The response page that you see in the browser is served by the IIS server running in the IIS container.

image13
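
The same check can be scripted from any machine that can reach the host; the host name below is a placeholder for your VM's external DNS name from the Summary tab.

# Placeholder host name - substitute your VM's external DNS name.
$response = Invoke-WebRequest -Uri 'http://my-container-host.example.com/' -UseBasicParsing
$response.StatusCode   # 200 indicates IIS in the container is answering on port 80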

This link explains how to create shared folders, allowing data to be shared between a container host and a container.

Windows containers include the ability to manage how much CPU, disk I/O, network and memory resources containers can consume. For details, read this link.

Here is another useful link explaining container networking in detail.

This is a technology blog. If you want to use Ravello to run Windows, you must comply with Microsoft's licensing policies and requirements. Please consult with your Microsoft representative.

The post How to Configure Windows 2016 Containers on AWS or Google Cloud using Ravello appeared first on The Ravello Blog.
