Channel: The Ravello Blog

Ravello Meets vExperts at VMworld


VMware is continuing its momentum and we are seeing lots of VMware labs in the form of VSAN labs, NSX labs, vRealize labs and more on Ravello. We are excited to be part of the virtualization ecosystem and we were back at VMworld this year. This VMworld was extra special though. Here are 3 things that made VMworld 2015 awesome for us:

  1. Winning Best of Show at VMworld 2015 - Thanks to all the support and love from our users, we won the coveted Best of Show award across all products in all categories for our new release InceptionSX - which runs nested ESXi on AWS/Google Cloud.

    Ravello wins VMworld 2015 Best of Show Award

  2. Being the first ever platform for cross-cloud vMotion - I really enjoyed Raghu’s keynote and vision of a unified hybrid cloud. My favorite part was the demo of the live vMotion from a VMware data center to vCloud Air. Particularly because Mike Preston had blogged about how he did a vMotion from AWS to Google Cloud using Ravello, and folks in the audience were kind enough to recognize Mike’s effort as the first ever cross-cloud vMotion in history.

    Scott Lowe Tweets about MW Preston, Ravello and vMotion

  3. Meeting vExperts and talking labs and insights on Ravello - Last but certainly not least, a personal favorite was catching up with many of the vExperts who came by our booth and shared what VMware labs they are running on Ravello. We are very grateful for all the insights - and excited to share them in the small video collage below. Big thanks to VMware vExperts Chris Hildebrandt, Rick Schlander, James Brown, Tim Smith and Mike Preston for sharing their labs and ideas.

[video url="https://www.youtube.com/watch?v=tNyXQ-HztqI"]

And in case you are curious and want to try Ravello for yourself, here is a link to a free trial with 2,880 free CPU hours.

The post Ravello Meets vExperts at VMworld appeared first on The Ravello Blog.


Product Update: New Ephemeral Access Tokens and Updates to Application Auto-Stop


Ephemeral Access Tokens and End-User Portal

We often see Ravello customers wanting to give someone outside their team temporary access to an environment. Consider a product manager who wants to give his partners temporary test environments for testing product integrations, a training provider who wants to give every student in his class temporary access to their own lab on Ravello - but just for one week, or a sales manager who wants to provide temporary demo environments to his partner SEs. We have added ephemeral access tokens and a simple end-user view of the system to support these use cases.

Ephemeral access allows you to provide limited, time-based access to a specific resource or set of resources to someone outside your organization. This can be very useful when integrating your own portal with Ravello infrastructure, or when you want to provide an external user with temporary access. For example, you can use these tokens to grant your partners limited access to a specific demo environment without creating a user in your account, or to provide your users with an extended demo environment for a limited period of time.
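
As a rough illustration, an ephemeral token pairs a set of permitted resources with an expiration time. The sketch below builds such a request body in Python; the field names, action names and structure are assumptions made for the example, not Ravello's documented API:

```python
import json
from datetime import datetime, timedelta, timezone

def ephemeral_token_payload(app_id: int, hours_valid: int) -> str:
    """Build an illustrative token request body: resources + expiration."""
    expiration = datetime.now(timezone.utc) + timedelta(hours=hours_valid)
    payload = {
        "name": "partner-demo-token",        # assumed field name
        "expirationTime": expiration.isoformat(),
        "permissions": [{                    # assumed structure
            "resourceType": "APPLICATION",
            "resourceId": app_id,
            "actions": ["READ", "EXECUTE"],  # assumed action names
        }],
    }
    return json.dumps(payload)

# e.g. one week of access to application 1234:
print(ephemeral_token_payload(1234, 24 * 7))
```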

See here for more information on ephemeral access tokens.

However, when we grant ephemeral access to a prospective user or to a student in a training lab, we don't always want to drop them into the full "Ravello world" - the canvas and network views, the library and VM properties configuration. In these cases we'd rather present end users with a streamlined, clean environment that gives an immediate impression of the application, its documentation and the proper way to connect to each VM. The Ravello end-user portal serves exactly this purpose.

When you create a new Ephemeral Access Token for a single application, you are presented with one additional URL, which leads the user directly to the End-user portal.

Ephemeral Access Tokens

Using this URL, your end user can now get direct access to the End-User portal:

End-User portal

By editing the application documentation, you'll be able to add whatever information or instructions you'd like to present to your end users.

Application Auto-Stop Mechanism Changes

Ravello's auto-stop functionality lets you configure how long an application runs before it is automatically stopped, eliminating the need to manually stop applications that you know are required only for a limited time. With the introduction of application scheduling, we've changed how the auto-stop mechanism works: it is now combined with application scheduling, so setting a stop time for an application creates a new scheduled "stop" task for that application. If you haven't worked with application scheduling, you should see little difference in behavior; if you do use the scheduling mechanism to control your application's stop and start times, Ravello will now show you the details of any tasks on your application that may interact with the current auto-stop time.
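
Conceptually, the change means an auto-stop time is just another scheduled task. A rough sketch of that idea (the task fields are illustrative, not Ravello's actual schema):

```python
from datetime import datetime, timedelta, timezone

def make_auto_stop_task(app_name: str, hours_from_now: float) -> dict:
    """Express an auto-stop time as a scheduled 'stop' task (illustrative)."""
    stop_at = datetime.now(timezone.utc) + timedelta(hours=hours_from_now)
    return {
        "application": app_name,
        "action": "stop",        # just another scheduled task
        "scheduledFor": stop_at,
    }

task = make_auto_stop_task("training-lab-07", 2)
print(task["action"], task["scheduledFor"].isoformat())
```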



Demonstrating SD-WAN with ease on AWS using Ravello


Author:
Matt Conran
Matt Conran is a Network Architect based in Ireland and a prolific blogger at Network Insight. In his spare time he writes on topics ranging from SDN, OpenFlow, NFV, OpenStack and cloud to automation and programming.

Software-defined WAN (SD-WAN) has gained ground in recent years. SD-WAN technology brings many benefits to the forefront - ranging from lower cost and increased flexibility to reduced complexity in the overall branch office network. In addition to larger players such as Cisco - which launched its IWAN (Intelligent WAN) solution some years back - and Citrix with its CloudBridge offering, this domain has seen many new entrants: CloudGenix, VeloCloud, Viptela, Talari Networks and Aryaka, to name a few. These networking companies need environments in which to demonstrate the value gained from SD-WAN, and Ravello's Networking Smart Labs offers the perfect environment to do so.

The following describes in detail how to set up Performance Routing (PfR) - one of the cornerstones of Cisco's IWAN solution - to show SD-WAN in action. It also includes a base blueprint built with the Cisco CSR1000v on Ravello Networking Smart Labs.

Get it on Repo
REPO, by Ravello Systems, is a library of public blueprints shared by experts in the infrastructure community.

WAN edge

The WAN edge is one of the most important functional blocks in the network. It is also one of the hardest areas to design. Traditional WANs are built around the Border Gateway Protocol (BGP), which is used to peer with other BGP speakers in remote autonomous systems. BGP is a policy-based routing protocol that allows you to tailor outbound traffic with a variety of metrics. It has proved to be the de facto WAN protocol and is great for reducing network complexity.

However, by default BGP does not take transit performance into account or detect transitory failures. It misses the shape of the network and cannot dynamically adjust the routing table based on real-time events. To enhance performance, many WAN designs manually combine routing protocols with additional mechanisms such as IP SLA and Enhanced Object Tracking, but these add configuration complexity. Also, traditional routing is destination-based only, which prohibits any kind of granular forwarding. All these factors have made the WAN edge a cumbersome module in the network. Flow and application awareness are needed to meet today's application requirements: we need additional insight into the protocols crossing the WAN edge in order to make intelligent routing decisions.
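
For context, the kind of manual combination described above looks roughly like this in IOS - an IP SLA probe tied to a tracked static route. The addresses and interface names here are illustrative, not taken from this lab:

```
ip sla 1
 icmp-echo 192.0.2.1 source-interface GigabitEthernet1
 frequency 10
ip sla schedule 1 life forever start-time now
!
track 1 ip sla 1 reachability
!
! The default route is withdrawn if the probe fails
ip route 0.0.0.0 0.0.0.0 10.0.0.2 track 1
```

Every such probe and track object has to be configured and maintained by hand, which is exactly the per-path overhead PfR automates away.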

Performance Routing (PfR)

Performance Routing (PfR), formerly known as Optimized Edge Routing (OER), enhances the WAN and adjusts traffic flows based on real-time events. It adds intelligence to the network, making the WAN intelligent and dynamic. It doesn't replace classic IP routing; it augments it and adds application awareness. PfR can select an egress or ingress interface based on characteristics such as reachability, jitter, delay, MOS score, throughput and monetary cost. PfR gains its intelligence by automatically collecting statistics, using Cisco IP SLA for active monitoring and NetFlow for passive monitoring. There is no need to manually configure NetFlow or IP SLA; they are implemented automatically by the PfR network.

Link and path information is analysed by a central controller, known as the PfR Master Controller (MC); a decision is made based on predefined policy, and an action is then carried out by the local Border Routers (BR). The MC is where all the decisions are made. It is similar to an SDN controller but IOS-based. It does not participate in any data-plane forwarding, only control-plane services, similar to the way a BGP route reflector sits in the network. All policies, such as preferred link and path parameters, are configured on the controller. It gathers information from the BR edge nodes and determines whether traffic classes are in or out of policy. If traffic is not in policy, it can instruct the BR to carry out route injection or dynamic PBR injection and use an alternative path.
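
The MC's decision loop can be pictured with a small sketch. This is illustrative pseudologic in Python - not how IOS implements PfR - and the thresholds and metric names are invented for the example:

```python
# Invented policy thresholds for the sketch.
POLICY = {"max_delay_ms": 150, "max_loss_pct": 1.0}

def out_of_policy(stats: dict) -> bool:
    """True if a path's measured stats violate the configured policy."""
    return (stats["delay_ms"] > POLICY["max_delay_ms"]
            or stats["loss_pct"] > POLICY["max_loss_pct"])

def choose_exit(paths: dict) -> str:
    """Prefer an in-policy exit with the lowest delay; else best effort."""
    in_policy = {k: v for k, v in paths.items() if not out_of_policy(v)}
    pool = in_policy or paths
    return min(pool, key=lambda k: pool[k]["delay_ms"])

paths = {
    "SP1": {"delay_ms": 220, "loss_pct": 0.1},  # out of policy on delay
    "SP2": {"delay_ms": 80, "loss_pct": 0.2},
}
print(choose_exit(paths))  # -> SP2
```

In the real system the "instruction" to move traffic is the route or PBR injection the MC pushes down to the BRs.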

The PfR BR sits in the data plane and participates in traffic forwarding. It is an edge router with one or more exit links to an ISP. The MC doesn't make any changes itself; it is the BR that actually implements the enforcement. A BR can be enabled on the same router as an MC, or it can be separate. All information exchanged between the MC and BR is protected with key chains.

PfR is a useful tool to have in any network. It has an observe mode, which lets the PfR nodes analyse path and link characteristics and report back for analysis. There is also a route control mode: if the controller determines there is an out-of-policy event, it can steer the routing table toward a more preferred path.

IWAN Lab Setup on Ravello

The Ravello Lab consists of two LAN networks separated by a Core. There is a jump host that has access to all nodes and it's here that external connectivity is permitted.


On LAN1 we have 2 x BR and 1 x MC. The MC and BR device functionality are combined on BR2. Each BR has two uplinks to the core nodes, SP1 and SP2. OSPF is running internally, and redistribution of connected subnets is used for transit link reachability.

On LAN2 we have a single BR and MC component combined on BR3. OSPF is also running in the internal LAN, redistributing connected subnets for transit link reachability.

There are a number of test networks on SP1 and SP2: 12.0.0.1/24 and 13.0.0.1/24 on SP1, and 14.0.0.1/24 and 15.0.0.1/24 on SP2. These networks are pingable from the LAN routers and can be used for reachability and performance testing.

Configuring the Nodes

The first thing to do is set up a keychain so the BR and MC devices can communicate. All communication between the BR and MC is protected. The authentication key must be configured on both the Master Controller and the Border Router.

key chain PFR
 key 1
   key-string CISCO

A PfR network must have at least two exit interfaces, and these must be explicitly configured on the MC. Logging is also turned on.

On BR1 interfaces GigabitEthernet3 and GigabitEthernet4 directly connect to SP1 and are specified as external.

pfr master
 logging

 border 150.1.3.3 key-chain PFR
  interface GigabitEthernet4 external
  interface GigabitEthernet3 external
  interface GigabitEthernet1 internal

On BR2 interfaces GigabitEthernet3 and GigabitEthernet4 directly connect to SP2 and are specified as external.

 border 150.1.4.4 key-chain PFR
  interface GigabitEthernet1 internal
  interface GigabitEthernet3 external
  interface GigabitEthernet4 external

On BR3 interfaces GigabitEthernet1 and GigabitEthernet2 directly connect to SP1 and SP2 and are specified as external.

 border 150.1.5.5 key-chain PFR
  interface GigabitEthernet1 external
  interface GigabitEthernet2 external
  interface GigabitEthernet4 internal

Once that is complete, set up the BR functionality on BR1, BR2 and BR3. The loopback addresses are reachable from the internal LAN of each node.

pfr border
 local Loopback0
 master 150.1.X.X key-chain PFR

The command show pfr master displays the status of BR connectivity as well as the default settings. Notice that the default mode is route control.


Both LAN routers have reachability to the test prefixes 12.0.0.1 - 15.0.0.1. Use these endpoints to test PfR functionality. As a test, under the pfr config change the external interfaces to max-xmit-utilization absolute 1:

 border 150.1.4.4 key-chain PFR
  interface GigabitEthernet4 external
   max-xmit-utilization absolute 1
  interface GigabitEthernet3 external
   max-xmit-utilization absolute 1
  interface GigabitEthernet1 internal

Send large packets from LAN1 to 14.0.0.1 and telnet to the prefix from a different host. The IP 14.0.0.1 is on SP2. The large pings trigger the out-of-policy event, and the telnet generates the NetFlow traffic. You will notice that the prefix 14.0.0.1 is now out of policy.
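
For example, from LAN1 the two triggers can be generated with standard IOS commands (the hostname and repeat count here are examples; any large, sustained ping will push the interface over the utilization threshold set above):

```
LAN1# ping 14.0.0.1 size 1500 repeat 500
LAN1# telnet 14.0.0.1
```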


The complete configurations for this setup are available on GitHub.

Conclusion

Ravello Network Smart Labs offers a unique way for SD-WAN solution providers and their ecosystem of resellers and trainers to show their technology in action. Interested in playing with this blueprint? Just open a Ravello account and add this blueprint to your library.


Public cloud – a new playground for cyber ninjas?


With multiple security breaches reported over the last couple of years (Target, Home Depot, Sony Pictures to name a few), enterprises today are scrambling to ‘beef-up’ their cyber security. They are investing to get the best-of-breed intrusion prevention & breach detection tools and securing their network through next generation firewall technologies & endpoint security solutions. Many are turning to innovative technologies such as big data analytics, cloud-enabled cybersecurity and advanced authentication to reduce cyber-risks to their environment.

While having the right tools and technology is necessary, it is equally important to have a trained workforce that can make use of them to keep enterprises safe from cyber attacks. And enterprises recognize that: in a recent 'Global State of Cyber Security' survey by PwC, 27% of respondents identified employee security awareness training as one of the top investment areas for their organization.


Global State of Cyber Security Survey 2015 - PwC

Cyber ranges are a great way to provide this training where trainees can practice and hone their skills in a real-world setting. However, given the amount of resources needed to stand-up a cyber range mimicking a real-life environment to scale, traditionally they have been cost-prohibitive. With Ravello’s nested virtualization and networking overlay technology, it is now possible to use the public cloud to quickly stand up cyber ranges with various simulation challenges complete with actual appliances & network traffic mimicking real world scenarios – to scale, on-demand and in a cost effective fashion. Public cloud really is the new playground for the cyber ninjas.


To learn how SimSpace used Ravello to stand up full-featured cyber ranges on AWS, join this webinar, where Lee Rossey, SimSpace CTO, will describe how he created a Virtual Clone Network using Ravello's cloud-based platform.


How to install and configure vRealize Automation (vRA) and test your orchestration scripts – Part 1


Whether you are installing and configuring VMware’s vRealize Automation (vRA) for the first time or need a lab to test your automation and orchestration scripts, you will find this step by step guide useful. Instead of relying on spare hardware, I will be deploying this in a Ravello lab which runs on AWS/Google Cloud. Since I can install ESXi on Ravello, I’ll be treating it just like my data center - so the steps will be similar after that. On a side note, you might want to refer to our previous posts about setting up labs for VSAN, NSX or just vCenter on Ravello and see what the VMware community is saying about it.

You can use this guide to try out the vRA product, test drive upgrade scenarios, test new features or develop and test new customizations without requiring the resources of a physical environment.

vRealize Automation can be deployed in a multitude of ways, but for this setup we'll keep the deployment as simple as possible, without any failover or redundancy capabilities. After completing the simple deployment of vRealize Automation, configuring a highly available setup is left as an exercise for the reader.

For the deployment of vRealize automation we’ll need the following components:

  • Windows domain controller or LDAP server.
  • Windows vRealize IAAS & SQL (Express) server.
  • vRealize virtual appliance.
  • Optional: vRealize identity appliance.
  • Optional: vRealize orchestrator appliance.
  • Optional: vCenter server – this can be either the vCenter appliance or a Windows virtual machine, in which case you could use the machine already provisioned for Active Directory.
  • Optional: 1 or more ESXi hosts.

Currently, my vRealize lab in Ravello looks as follows. It contains the vRealize appliance, an identity appliance, an IAAS server, a domain controller and an orchestrator appliance. It also includes a Windows vCenter server and two ESXi hosts to test the deployment of virtual machines. These last components are completely optional and not required for the deployment of vRealize Automation.

vRealize Automation can be used for a multitude of tasks. One of these is the deployment of virtual machines on vSphere, vCloud Director, OpenStack, Amazon Web Services or a variety of other public or private cloud providers. For this you'll need some kind of cloud provider or virtualization platform. The other functionality is the provisioning of advanced services through a workflow engine called vRealize Orchestrator. This allows you to provision miscellaneous services through tools such as PowerShell, bash, REST APIs or a multitude of plugins available for various products.

You can test both of these features in a Ravello lab. Ravello's nested virtualization with hardware acceleration enables you to run OpenStack and ESXi environments on AWS and Google Cloud. You can also run Exchange and other Windows and Linux systems as VMs to test vRA's orchestration capabilities.


Deployment

This deployment presumes that you already have a vCenter server running. If you are not using vCenter in this lab, you’ll have to deploy the identity appliance.

Pre-deployment notes

For all Linux based appliances we’ll need to change the compliance check to make sure the appliance boots automatically on Ravello. This can be done in the following way:

  • Login to the appliance using ssh or the console
  • Run vi /etc/init.d/boot.compliance
  • Change line 47 – (add “-q”)
    • From MSG=`/usr/bin/isCompliant`
    • To MSG=`/usr/bin/isCompliant -q`
  • Change line 48 – substitute (“0” instead of “$?”)
    • From CODE=$?
    • To CODE=0
  • Save the changes you made in /etc/init.d/boot.compliance.
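
The same two edits can be applied non-interactively. The sed expressions below are demonstrated on the two relevant lines; on the appliance you would run them against /etc/init.d/boot.compliance itself (back it up first):

```shell
# Demonstrate the two boot.compliance edits on the affected lines:
# add "-q" to the isCompliant call and force the exit code to 0.
printf 'MSG=`/usr/bin/isCompliant`\nCODE=$?\n' \
  | sed -e 's|isCompliant`|isCompliant -q`|' \
        -e 's|CODE=\$?|CODE=0|'
```

The pipeline prints the lines exactly as they should read after editing, so you can sanity-check the expressions before applying them to the real file.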

In addition, all appliances and servers should point to the same NTP time source. In virtual machines this can be configured through the OS settings; in virtual appliances it can be configured through the VAMI interface on port 5480, under Admin -> Time Settings.

vRealize Identity appliance

To deploy the identity appliance, you'll first have to convert it from stream-optimized to non-stream-optimized format before it can be uploaded to the Ravello library. A detailed procedure on how to do this can be found here.

After the appliance has been deployed, log in to the console as root with a blank password and change the password by running passwd. Then run /opt/vmware/share/vami/vami_config_net to configure your network. Lastly, configure a service in Ravello on the identity appliance to open port 5480. After doing this, you can log in to the VAMI interface at https://your-public-ip:5480 to configure the rest of the identity appliance.

Open the SSO tab and enter a domain and password.


Move on to "Host Settings" and enter an SSO hostname. Keep in mind that this name should be the same as the hostname registered in either Ravello DNS or your AD DNS.


Open the SSL tab and select either "Generate a self-signed certificate" or, if you have your own SSL certificate, "Import a PEM encoded certificate". Enter your certificate details and apply; after a short while the certificate will be generated. Of note here is that the common name should match the SSO hostname you entered earlier.


Lastly, if you have Active Directory, open the Active Directory tab and enter your domain information. This step is not required, since you can configure AD authentication in vRealize Automation afterwards.


vRealize Appliance

After your identity appliance is configured, move on to the vRealize Automation appliance. This can be downloaded from the VMware site as an OVA file. Rename the OVA file's extension to .zip and extract the OVF, which you can then upload to the Ravello library.

After powering on the appliance, we'll have to configure it. Log in to the console as root with a blank password and change the password by running passwd. Then run /opt/vmware/share/vami/vami_config_net to configure your network. Lastly, configure a service in Ravello on the vRA appliance to open port 5480. After doing this, you can log in to the VAMI interface at https://your-public-ip:5480 to configure the rest of the vRA appliance.

Open the vRA settings tab and configure the host settings. Select the "Update host" option and enter the hostname. Personally, I prefer to set this to an external DNS name if you will be accessing your lab environment from outside. This can be either the DNS name Ravello gave you (found in the summary of the virtual machine) or a CNAME record pointing to your Ravello DNS name or IP.

Select “Generate Certificate” or “Import” depending on whether you have a presigned certificate or not. Keep in mind that the common name should exactly match the hostname you entered above.


The activation process can take a few minutes, so take a coffee break; after the service is configured, move on to the SSO tab.

Enter your SSO host here. Depending on whether you chose to use an identity appliance, this should be either your identity appliance's hostname (the same one you configured in the appliance) or your vCenter hostname. The port should be 7444 for vCenter 5.5 or the identity appliance, and 443 if you are running vCenter 6.
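
The port rule above can be summarized in a tiny helper (a sketch; the source labels are made up for the example):

```python
# Sketch of the SSO port rule described above; labels are illustrative.
def sso_port(identity_source: str) -> int:
    """7444 for vCenter 5.5 or the identity appliance, 443 for vCenter 6."""
    return 443 if identity_source == "vcenter-6" else 7444

print(sso_port("identity-appliance"))  # -> 7444
print(sso_port("vcenter-6"))           # -> 443
```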


Enter your administrator user, default tenant (depending on what you configured in vCenter or the identity appliance, administrator and vsphere.local by default) and your password.

After a few minutes, SSO should return an OK status and be configured.

Move on to the licensing tab and enter your license code. A license is required to run vRealize Automation at all, but you should be able to get a trial license.
Open the "IaaS install" page and download the IaaS installer to your IAAS server. Leave the rest of the settings at their defaults.

vRealize IAAS

This part presumes that you have already installed SQL Express or SQL Server. If you haven't done so yet, install it first before proceeding.
Start by downloading the vRealize Automation prereq script. Run the script and follow the instructions, after which your server should be correctly configured to install vRealize Automation in the easiest way possible.

After preparing the server, start the installer you downloaded from the vRA appliance earlier. Enter the credentials for your vRA appliance (the root password you set earlier). Then ensure that all the prerequisites are met. If the prerequisite checks complain about the Windows firewall and you've verified that the firewall is either off or the ports are opened correctly, select "ByPass" to ignore these checks.


Enter a password for your user account, a decryption key for the database, and your SQL server.

When configuring the DEM worker, select "Install and configure vSphere agent" and note down the values of the vSphere agent name and the endpoint name, since you'll need them when adding a vSphere backend. I usually name my vSphere agent after the FQDN of my vCenter server, but you can call it anything you want, as long as the name of the endpoint configured in vRealize Automation is the same.


On the component registry page, click "Load" next to the default tenant to load the tenant information. Download the certificate and select "Accept Certificate". Enter your SSO credentials (administrator@vsphere.local by default) and click Test. Then enter the hostname of your IAAS server (this needs to be DNS-resolvable) and click Test.


After all these steps have been performed, the installation starts, and after about 10-15 minutes you should have a working vRealize Automation setup. Starting the services can initially take quite a bit of time, so some patience is required, but you should then be able to log in to the vRealize appliance interface at https://your-vra-hostname/vcac. If you've forwarded port 443 to the vRA appliance (not the IAAS server), the console will be accessible through https://your-vra-public-hostname/vcac.

This concludes the initial setup of the vRealize Automation environment. In the next part we'll continue with the vRealize Automation configuration and the deployment of virtual machines on various cloud platforms.


Malware analysis using REMnux on public cloud


Calling all malware analysts! We are proud to share that REMnux is now available on Ravello Repo. Using Ravello’s nested virtualization and networking overlay technology, it is now possible to run REMnux in an isolated sandbox environment on public cloud.

For the uninitiated, REMnux is a Linux toolkit for helping malware analysts with reverse-engineering malicious software. At the heart of this toolkit is the REMnux Linux distribution, based on Ubuntu. REMnux incorporates many tools for analyzing Windows and Linux malware, examining browser-based threats such as obfuscated JavaScript, exploring suspicious document files and taking apart other malicious artifacts. Using REMnux, forensic investigators and incident responders can intercept suspicious network traffic in an isolated lab when performing behavioral malware analysis.

To run REMnux, please open a Ravello trial account, and add the REMnux blueprint to your library. Read more on REMnux blog on how to get your REMnux environment running on Ravello.


SimSpace hosts its CyberVision event on Ravello


We are excited to share that Massachusetts Executive Office of Public Safety and Security will be hosting Boston’s first-ever CyberVision live-adversary cybersecurity workshop with live cyber ranges built by SimSpace running on AWS & Google using Ravello's nested virtualization and networking overlay technology.

CyberVision is a hands-on-keyboard simulated attack on a city's critical infrastructure - only in this event the defenders and attackers actively engage one another in a high-fidelity copy of a real environment, without risking the production infrastructure. The New England cyber-attack scenario will draw on talent and experience from critical infrastructure operators, security practitioners, cyber innovators, academics and government personnel. The cyber range developed by SimSpace and running on Ravello emulates several Massachusetts digital enterprises, using virtual clone network technology to create a high-fidelity copy of the environment cyber defenders and operators use in their real network.

Interested in attending the CyberVision workshop? Please register at CyberVision Boston. Interested in trying Ravello? Please open a Ravello trial account.


How to integrate Brocade SDN Controller with OpenStack on AWS & Google


Brocade's SDN Controller and the OpenDaylight controller are excellent options for companies looking to bring virtual network services to OpenStack. In the past year, Brocade has made significant investments to improve the integration between OpenDaylight and OpenStack - including a more complete interface to Neutron and OVSDB, and touching on policy, topology, provisioning and additional southbound plugins.

[caption id="attachment_6044" align="aligncenter" width="351"]Brocade's SDN Controller with OpenStack[/caption]

Ravello with its nested virtualization and networking overlay serves as an excellent platform for modeling, building and testing Network Function Virtualization (NFV) and SDN topologies – such as orchestration of network services using OpenDaylight on OpenStack Neutron – before production roll-outs. Network ISVs and enterprises alike can use Ravello to jump-start their NFV PoCs and deployments by accessing data-center networking on AWS & Google cloud without having to wait for hardware resources, and incurring CapEx.

Alec Rooney from Elbrys Networks has written a detailed article on how to set up a fully functional Brocade SDN / OpenDaylight controller integrated with OpenStack on Ravello, and he has a video to walk you through the steps.


[embed]https://youtu.be/tXw4W3RQDMM[/embed]

Interested in setting up your very own OpenDaylight OpenStack integration? Just open a Ravello account and follow the instructions.




Non-dummies guide to nested ESXi lab on Ravello


Ravello's nested ESXi offering has been out for quite some time. With more and more users, use cases and advanced setups created on a regular basis, we want to make sure you know where to find the guides and tools that will help you quickly run your VMware vSphere/ESXi lab on Ravello.

Before you get started: do you really need to install ESXi?

The first thing to do before you get started is to make sure you really need to run ESXi in the cloud. VMware applications can, and have been, running successfully natively on Ravello's HVX right from the start: running SharePoint environments, SAP, .NET and other enterprise applications, and even virtual networking and security appliances, should usually be done natively on Ravello. We previously published a blog to help you figure out whether you can run your VMware application on Ravello's HVX or whether you need to install the ESXi hypervisor. If you need help, feel free to email us with your use case.

[column size="1/2"][caption id="attachment_6057" align="aligncenter" width="612"]Run native on Ravello[/caption][/column][column size="1/2 last"][caption id="attachment_6058" align="aligncenter" width="847"]Run the nested ESXi hypervisor on HVX[/caption][/column]

Nested ESXi: setting up

If you determined that running the hypervisor is indeed required, we’ve provided a few how-to guides to set up your basic lab and get going:

  1. Install and configure ESXi on the public cloud: upload ESXi ISO to Ravello, install ESXi, configure ESXi and save your ESXi to your Ravello VM library.
  2. Install and configure VMware vCenter 5.5 server on the public cloud: upload vCenter Server appliance to Ravello, create vCenter VM in Ravello, configure it to run on Ravello, save it to your VM library.
  3. Set up a full VMware datacenter in a public cloud: create a data center, configure ESXi hosts to use NFS, create VMs to run on the VMware cluster, set the VMs' start and shutdown order, and save the application blueprint.

Advanced setups: VPNs, NFS, DHCP for 2nd level guests and more

Now that you’ve got your basic setup going, you will probably want to add some more advanced elements to your lab environment. Here are a few step-by-step guides to start with:

  1. Build simple shared storage using an NFS server: simply install and configure your NFS server and save it to your VM library.
  2. Setup DHCP for 2nd level guests running on ESXi: since Ravello is actually unaware of the 2nd level guests running on your ESXi hypervisor, those guests cannot reach Ravello’s DHCP server by default. Here you’ll learn how to define the networking in Ravello and your vSphere environment to support another DHCP server and install and configure your own 2nd level guest DHCP server VM to service the other guests.
  3. Set up a VPN connection to an environment running in Ravello from a vanilla pfSense image: a step-by-step guide through the scenario where one environment is running in the cloud with Ravello, and the other can be in an on-premises data center, in a VPC in AWS, etc.
  4. Build a 250-node VMware vSphere/ESXi lab environment in AWS for testing: this large-scale ESXi data center in AWS, which costs less than $250/hr, is a guide useful for enterprises upgrade-testing their VMware vSphere environments or doing new product and feature testing.

Additional VMware products how-tos

  • Install and run VSAN 6.1 environment on AWS or Google Cloud: we created this guide to facilitate testing out new features and showcasing storage management products working with this VSAN release. We walk you through configuring your VSAN environment and saving the setup as a blueprint in your Ravello library. This is very useful, for example, for demo and POC environments that can be provisioned in minutes.
  • Install VMware NSX 6.2: Software defined networking is an essential component of the software defined data-center. While installing NSX on a “normal” platform can be resource-intensive and time-consuming, it is valuable as it enables you to virtualize your networking infrastructure. Learn how, by provisioning NSX on Ravello, you can install it once and re-deploy it any time, greatly reducing the time required to set up a new testing, demo or PoC environment.
  • Install and configure vRealize Automation and test orchestration scripts: A simple deployment to try out vRA, test upgrade scenarios and new features, develop new customizations and more. The setup contains the vRealize appliance, an identity appliance, an IaaS server, a domain controller and an orchestrator appliance, and, optionally, a Windows vCenter server and two ESXi hosts to test the deployment of virtual machines.

I hope this brief description of the guides we’ve put together will help you quickly find your way to what you’re looking to run on Ravello. Feel free to comment here with other products you’d like guides for, or tell us about the setups you’ve built in your lab.

The post Non-dummies guide to nested ESXi lab on Ravello appeared first on The Ravello Blog.

How to model and test NFV deployments on AWS & Google Cloud


Advanced Enterprise Networking In AWS EC2 - A Hands On Guide

Author:
Hemed GurAry, CISSP and CISA, Amdocs
Hemed GurAry is a Cloud and Security architect with Amdocs. Hemed specializes in network and application architecture for Finance and Telcos, bringing experience as a PMO and a leading team member in high key projects. His ongoing passion is hacking new technologies.

Network Function Virtualization has taken the networking world by storm. It brings to the table many benefits such as cost savings, network programmability and standardization to name a few.

Ravello with its nested virtualization, software defined networking overlay and an easy to use ‘drag and drop’ platform offers a quick way to set up these environments. With Ravello being a cloud based platform, it is available on-demand and opens up the opportunity to build sophisticated deployments without having to invest time and money to create a NFVI from scratch.

This three-part blog series will walk you through how to build a complete NFV deployment on Ravello with a working vFW service chain on board. The deployment will be based on Juniper Contrail and OpenStack, comprising three nodes. In this first part, we will install and configure the NFV setup.

Deployment Architecture

VMs

Start with three empty virtual servers; each server has the following properties: 4 CPUs, 32GB of memory, 128GB of storage and one network interface.

Deployment Architecture - VMs

Note: It’s important to define a hostname and use static IPs for each server to preserve the setup’s state.

Software

The following software packages are used in this tutorial:

  • Ubuntu Precise Pangolin Minimal Server 12.04.3
  • Juniper Contrail release 2.01 build 41 + Openstack Icehouse
  • Cirros 0.3.4

Network

The three virtual servers running on Ravello are connected to our underlay network, CIDR: 10.0.0.0/24.

Three overlay networks were configured on our contrail WebUI:

  • Management – 192.168.100.0/24
  • Left – 10.16.1.0/24
  • Right – 10.26.1.0/24
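As a quick sanity check before configuring, you can confirm that the overlay subnets are disjoint from each other and from the underlay. A minimal sketch using Python's `ipaddress` module (the CIDRs are the ones listed above):

```python
import ipaddress

# Underlay and overlay networks from the setup above
networks = {
    "underlay":   ipaddress.ip_network("10.0.0.0/24"),
    "management": ipaddress.ip_network("192.168.100.0/24"),
    "left":       ipaddress.ip_network("10.16.1.0/24"),
    "right":      ipaddress.ip_network("10.26.1.0/24"),
}

# Every pair of networks must be disjoint
names = list(networks)
for i, a in enumerate(names):
    for b in names[i + 1:]:
        assert not networks[a].overlaps(networks[b]), f"{a} overlaps {b}"
print("no overlapping subnets")
```

Overlapping subnets here tend to surface later as confusing routing failures inside Contrail, so it is worth a ten-second check.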

image03

Configuration Steps

Below are step-by-step instructions on how to configure the setup:

  1. Set up VMs and install the operating system
  2. Download Contrail packages and install controller node
  3. Fabric testbed.py population
  4. Install packages on compute nodes and provision setup
  5. Setup self-test

Step 1: Set up VMs and install the operating system

We will start by configuring the Ravello application, setting up the VMs and installing the operating system on each VM. This guide focuses on elements specific to Contrail, so if you don’t know how to build a Ravello application, please refer to the Ravello User Guide first.

It is also assumed you are able to install Ubuntu on the servers, either from an ISO or by using a preconfigured image. We installed Ubuntu 12.04.3 on an empty Ravello VM and then reused a snapshot.

The following properties are the same for all the VMs.

  • CPUs: 4
  • Mem Size: 32GB
  • Display: VMware SVGA
  • Allow Nested Virtualization: Yes
  • Disk: hda
  • Disk Size: 128GB
  • Controller: VirtIO
  • Network Name: eth0
  • Network Device: VirtIO
  • User: root
  • Password: Adm1n2

These are the individual properties of the three VMs:

Host       IP          Supplied services          Role
CP99       10.0.0.40   22, 8080, 80, 443, 8143    Controller
compute1   10.0.0.41   22                         Compute node
compute2   10.0.0.42   22                         Compute node
  1. Set up the three servers and install the operating system with the OpenSSH role.
  2. Update /etc/hostname file with server’s hostname
  3. Update /etc/hosts file to contain the following
    127.0.0.1	localhost	
    10.0.0.40	CP99
    10.0.0.41	compute1
    10.0.0.42	compute2
    # The following lines are desirable for IPv6 capable hosts
    ::1     ip6-localhost ip6-loopback
    fe00::0 ip6-localnet
    ff00::0 ip6-mcastprefix
    ff02::1 ip6-allnodes
    ff02::2 ip6-allrouters
  4. Update the /etc/network/interfaces file (use each server's own static address)
    # The primary network interface
    auto eth0
    iface eth0 inet static
    # use this server's own static IP: 10.0.0.40 (CP99), 10.0.0.41 (compute1) or 10.0.0.42 (compute2)
    address 10.0.0.40
    netmask 255.255.255.0
    gateway 10.0.0.1
    dns-nameservers 8.8.8.8 8.8.4.4
  5. Last, validate the installation by going over the following checklist:
    • Validate SSH connectivity from your workstation
    • Validate that all of the servers are time-synced
    • Validate all servers can ping one another (use hostnames, to confirm host names are resolvable)
    • Validate all servers can ssh and scp between one another
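The checklist above is easy to script. Here is a hedged sketch of the name-resolution and SSH-reachability checks using only Python's `socket` module (the host table is the one from this guide; time sync still has to be verified separately, e.g. with `date` on each node):

```python
import socket

# Host table from this guide
HOSTS = {"CP99": "10.0.0.40", "compute1": "10.0.0.41", "compute2": "10.0.0.42"}

def resolves_to(name, expected_ip):
    """True if `name` resolves (via /etc/hosts or DNS) to the expected IP."""
    try:
        return socket.gethostbyname(name) == expected_ip
    except OSError:
        return False

def ssh_reachable(ip, port=22, timeout=3):
    """True if a TCP connection to ip:port succeeds within `timeout` seconds."""
    try:
        with socket.create_connection((ip, port), timeout=timeout):
            return True
    except OSError:
        return False

def run_checklist(hosts=HOSTS):
    """Map each host to (resolves correctly, SSH port reachable)."""
    return {name: (resolves_to(name, ip), ssh_reachable(ip))
            for name, ip in hosts.items()}
```

Run `run_checklist()` from any of the three nodes; every entry should come back `(True, True)` before you move on to Step 2.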

Step 2: Download Contrail packages and install controller node

There are three methods to get the Contrail packages:

  • Build Open Contrail packages from source
  • Download pre-built Open Contrail packages
  • Download pre-built Contrail packages

For our guide we will use the latter.

Note: This procedure is specific to installing contrail 2.0X on Ubuntu 12.04.3 and includes a kernel upgrade to kernel 3.13.0-34.

  1. Head on to Contrail’s download page and download the application package
    contrail-install-packages_2.01-41-icehouse_all.deb

    Contrail’s download page

  2. Copy the application package file to the /tmp/ folder on CP99 (10.0.0.40)
    scp /tmp/contrail-install-packages_2.01-41-icehouse_all.deb root@10.0.0.40:/tmp
  3. SSH to CP99 and install the package
    dpkg -i /tmp/contrail-install-packages_2.01-41-icehouse_all.deb
  4. Run the following command to create a local Contrail repository and fabric utilities at /opt/contrail/
    cd /opt/contrail/contrail_packages;   ./setup.sh

Step 3: Fabric testbed.py population

Create a Fabric’s testbed.py file with the relevant configuration:

  1. Create testbed.py using nano editor
    nano /opt/contrail/utils/fabfile/testbeds/testbed.py
  2. Paste the following block of text to the nano editor and save the file
    from fabric.api import env
    #Management ip addresses of hosts in the cluster
    host1 = 'root@10.0.0.40'
    host2 = 'root@10.0.0.41'
    host3 = 'root@10.0.0.42'
    #External routers if any
    #for eg.
    #ext_routers = [('mx1', '10.204.216.253')]
    ext_routers = []
    
    #Autonomous system number
    #router_asn = 64512
    router_asn = 64512
    
    #Host from which the fab commands are triggered to install and provision
    host_build = 'root@10.0.0.40'
    
    #Role definition of the hosts.
    env.roledefs = {
        'all': [host1, host2, host3],
        'cfgm': [host1],
        'openstack': [host1],
        'control': [host1],
        'compute': [host2, host3],
        'collector': [host1],
        'webui': [host1],
        'database': [host1],
        'build': [host_build],
    }
    
    #Openstack admin password
    env.openstack_admin_password = 'Adm1n2'
    
    #Hostnames
    env.hostnames = {
        'all': ['CP99', 'compute1', 'compute2']
    }
    
    env.password = 'Adm1n2'
    #Passwords of each host
    env.passwords = {
        host1: 'Adm1n2',
        host2: 'Adm1n2',
        host3: 'Adm1n2',
    
        host_build: 'Adm1n2',
    }
    
    #For reimage purpose
    env.ostypes = {
        host1:'ubuntu',
    	host2:'ubuntu',
    	host3:'ubuntu',
    }
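A mistyped host or missing password in testbed.py is a common cause of failed fab runs. Below is a small, hypothetical sanity check (not part of Contrail or Fabric) that mirrors the file above and verifies every host referenced in a role is listed in 'all' and has a password:

```python
# Hypothetical sanity check mirroring the testbed.py above
# (values repeated here so the check is self-contained)
host1, host2, host3 = 'root@10.0.0.40', 'root@10.0.0.41', 'root@10.0.0.42'
host_build = host1

roledefs = {
    'all': [host1, host2, host3],
    'cfgm': [host1], 'openstack': [host1], 'control': [host1],
    'compute': [host2, host3], 'collector': [host1],
    'webui': [host1], 'database': [host1], 'build': [host_build],
}
passwords = {host1: 'Adm1n2', host2: 'Adm1n2', host3: 'Adm1n2'}

def check_testbed(roledefs, passwords):
    """Report hosts referenced by a role but missing from 'all' or passwords."""
    problems = []
    for role, hosts in roledefs.items():
        for h in hosts:
            if h not in roledefs['all']:
                problems.append(f"{h} (role {role}) missing from 'all'")
            if h not in passwords:
                problems.append(f"{h} (role {role}) has no password")
    return problems

assert check_testbed(roledefs, passwords) == []
```

An empty problem list does not prove the file is correct, but it catches the copy-paste mistakes that otherwise only show up mid-way through `fab setup_all`.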

Step 4: Install packages on compute nodes and provision setup

Use Fabric to install packages on the compute nodes, upgrade the Linux kernel and provision the whole cluster:

  1. Issue the following command from the controller
    cd /opt/contrail/utils; fab install_pkg_all:/tmp/contrail-install-packages_2.01-41-icehouse_all.deb
  2. Upgrade Ubuntu kernel
    fab upgrade_kernel_all
  3. Perform installation
    fab install_contrail
  4. Provision cluster
    fab setup_all

Step 5: Setup self-test

To finalize part one, we will use three methods to test the health of our new setup:

  • Contrail’s status commands
  • Horizon login test
  • Contrail webgui monitor
  1. To get Contrail’s status run the following command from the controller: contrail-status

    Note: Allow up to 10 minutes for the whole system to spin up
    Expected output:
    == Contrail Control ==
    supervisor-control:           active
    contrail-control              active
    contrail-control-nodemgr      active
    contrail-dns                  active
    contrail-named                active
    
    == Contrail Analytics ==
    supervisor-analytics:         active
    contrail-analytics-api        active
    contrail-analytics-nodemgr    active
    contrail-collector            active
    contrail-query-engine         active
    
    == Contrail Config ==
    supervisor-config:            active
    contrail-api:0                active
    contrail-config-nodemgr       active
    contrail-discovery:0          active
    contrail-schema               active
    contrail-svc-monitor          active
    ifmap                         active
    
    == Contrail Web UI ==
    supervisor-webui:             active
    contrail-webui                active
    contrail-webui-middleware     active
    
    == Contrail Database ==
    supervisor-database:          active
    contrail-database             active
    contrail-database-nodemgr     active
    
    == Contrail Support Services ==
    supervisor-support-service:   active
    rabbitmq-server               active
  2. Next run the openstack-status command
    openstack-status
    Expected output:
    == Nova services ==
    openstack-nova-api:           active
    openstack-nova-compute:       inactive (disabled on boot)
    openstack-nova-network:       inactive (disabled on boot)
    openstack-nova-scheduler:     active
    openstack-nova-volume:        inactive (disabled on boot)
    openstack-nova-conductor:     active
    == Glance services ==
    openstack-glance-api:         active
    openstack-glance-registry:    active
    == Keystone service ==
    openstack-keystone:           active
    == Cinder services ==
    openstack-cinder-api:         active
    openstack-cinder-scheduler:   active
    openstack-cinder-volume:      inactive (disabled on boot)
    == Support services ==
    mysql:                        inactive (disabled on boot)
    rabbitmq-server:              active
    memcached:                    inactive (disabled on boot)
    == Keystone users ==
    Warning keystonerc not sourced
  3. Log in to Horizon by browsing to the following URL (the controller's address)
    http://10.0.0.40/horizon
    1. Use the credentials we set earlier in testbed.py: u:admin/p:Adm1n2
      image05
  4. Log in to Contrail’s web GUI by browsing to the following URL
    http://10.0.0.40:8080
    1. Use the credentials we set earlier in testbed.py: u:admin/p:Adm1n2
      image04
    2. Review monitor to check for system alerts

Summary

At this stage you should have a working multi-node setup of Contrail with two compute nodes, where you can exercise much if not most of Contrail’s functionality. Stay tuned for the next blog posts explaining:

  • How to functionally test the setup
  • How to install a simple gateway
  • How to configure a vFW service chain

I would like to thank Igor Shakhman from my group in Amdocs and Chen Nisnkorn from Ravello Systems for collaborating with me on this project.

The post How to model and test NFV deployments on AWS & Google Cloud appeared first on The Ravello Blog.

How to setup your ESXi lab for upgrading from VMware vSphere 5.5 to 6.0


ITQ

With the new release of VMware vSphere 6.0, many organizations are thinking about upgrading from the existing 5.5 version to 6.0. However, upgrading multi-host ESXi environments running production systems is not an easy task. Most IT administrators would like to perform the upgrade in a controlled lab environment so they can practice the upgrade steps, create a run book and then do the actual upgrade in their data center environments. The challenge is that it takes a long time to procure hardware and set up isolated multi-host ESXi environments that can be used as test labs to perform upgrades. Ravello Systems allows you to run nested ESXi on the public clouds AWS and Google Cloud. In this blog, we will describe how you can practice the upgrade from 5.5 to 6.0 in ESXi lab environments created on public clouds.

VMware vSphere can be deployed with either a Windows-based vCenter or the Linux-based VMware vCenter Server Appliance (VCSA). In this document we'll discuss the upgrade procedure for both platforms, along with the advantages and disadvantages of each. In addition, we'll talk about ways to easily upgrade ESXi from 5.5 to 6.0 and when to use which method.

Let’s assume that our existing ESXi 5.5 environment, which is to be upgraded to 6.0, consists of the following components:

  • One preinstalled ESXi host to host the vCenter 6 VM with the following specs:
    • 2 CPU
    • 12GB Memory
    • 150GB Storage
    • For more information on hardware requirements, see here.
  • Any other ESXi hosts - if you are already running these in your current lab environment or wish to install or upgrade additional ESXi hosts.
  • One preinstalled vCenter 5.5 server
    • This can be either running on windows or the VCSA.
    • For the Windows vCenter, this VM can run directly on Ravello. If you are running the VCSA, it should run on a nested ESXi host; keep in mind the additional resources required to host the new VCSA.
  • Fully resolvable DNS, both forward and reverse. Keep in mind that the Ravello DNS currently does not perform reverse lookups and as such you'll need an external DNS server such as Microsoft AD DNS or a Linux-based DNS server such as Bind or PowerDNS

First, you will set up an ESXi 5.5 environment which mirrors your existing 5.5 setup in your data center. Follow the instructions in this blog to set up this environment.

Then, execute a test upgrade in this isolated 5.5 environment replica on Ravello and document the steps; you can then run them in your data center for the actual upgrade.

image11

Upgrading Windows-based vCenter

Upgrading the Windows-based vCenter from 5.5 is a relatively simple process. First, we'll go over the requirements before starting the upgrade:

  • Ensure your operating system is compatible. Any Windows Server version from 2008 R2 through 2012 R2 is supported.
  • Verify that the hostname of your vCenter server is resolvable, both forward and reverse. You can test this by starting a command prompt on your Windows server, then running the following commands.
    • nslookup yourvcenter.host.name
    • nslookup yourvcenter.ip.address

    The first command should return the IP address of the vCenter, which should match the address used in the second command; that in turn should return the hostname you entered in the first command.
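The same forward/reverse consistency test can be expressed in Python with the standard `socket` module; a hedged sketch (run it on the machine whose resolver you want to test):

```python
import socket

def dns_round_trip_ok(hostname):
    """Forward-resolve `hostname`, reverse-resolve the resulting IP, and
    confirm the PTR record points back at the same (short) name."""
    try:
        ip = socket.gethostbyname(hostname)              # nslookup hostname
        name, _aliases, _ips = socket.gethostbyaddr(ip)  # nslookup ip
    except OSError:
        return False
    return name.split(".")[0].lower() == hostname.split(".")[0].lower()
```

Call `dns_round_trip_ok("yourvcenter.host.name")` on the vCenter server; a False result means this prerequisite is not met and the upgrade will likely fail later.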

After this has been completed, we can start the actual upgrade. First, attach the iso image of the vCenter installer to your Ravello windows vCenter VM. Start the autorun.exe located on the DVD and start the installation:

vCenter installer

Next, accept the terms of the license agreement and continue:

vCenter installer

Provide your administrator credentials for vCenter single-signon:

image15

Accepting the default ports is recommended, so for this setup we'll do that. If you change any default ports, take note of the custom ports in case you'll need them later.

image02

Again, accepting the default install locations is recommended. If you wish to change these locations, note them down.

image01

Confirm that you have backed up your vCenter and select upgrade, and you are on your way. The upgrade will take anywhere between 10 and 45 minutes depending on a variety of factors, so this is the time to get some coffee and watch the progress bar.

After the upgrade is complete, you should be presented with the following screen:

image09

Launch the web client (or if you have already configured ports to be forwarded in Ravello connect from your own computer) and try to log in with your administrator@vsphere.local credentials.

If the vSphere web client doesn't load, ensure that you have the Desktop Experience role installed, which is required to load all vSphere web client components. This can be done through the "Add roles and features" wizard included in Windows or, more easily, through the following PowerShell command:

Install-WindowsFeature Desktop-Experience

After logging in, you will be presented with your new vCenter 6 web client. If you open the help -> About VMware vSphere in the top right, you should see the new version number.

image00

And that's it for the Windows vCenter upgrade! Afterwards, you can proceed with the "Upgrading ESXi" if you wish to upgrade your ESXi hosts to version 6 as well.

Upgrading VMware vCenter Server Appliance

Upgrading the vCenter Server Appliance, while not much more complicated, follows a slightly different procedure from the Windows vCenter Server. Instead of performing an in-place upgrade, a new appliance will be deployed, configured and started, after which the old appliance will be disabled and the IP addresses will be swapped. Since we currently cannot install the VCSA 6 appliance directly on Ravello, we'll need to run it nested on ESXi.

First off, we'll need to validate the requirements before upgrading the VCSA:

  • Verify that the hostname of your existing 5.5 vCenter server is resolvable, both forward and reverse. You can test this by starting a command prompt on your Windows server, then running the following commands.
    • nslookup yourvcenter.host.name
    • nslookup yourvcenter.ip.address

    The first command should return the IP address of the vCenter, which should match the address used in the second command; that in turn should return the hostname you entered in the first command.

  • Ensure you have an ESXi host running on Ravello which is configured to run virtual machines and matches the hardware requirement mentioned above. In addition, this ESXi host needs to have a port group which is in the same network as the current VCSA.
  • Ensure you have a windows machine running in Ravello to perform the upgrade. This can be a temporary virtual machine, but currently a windows machine is required for the VCSA upgrade procedure.

Connect the vCenter 6 ISO to your windows machine and log in to the desktop, preferably through RDP.

If you are running a server OS, ensure that the Desktop Experience role is installed, either through the "Add roles and features" wizard included in Windows or, more easily, through the following PowerShell command:

Install-WindowsFeature Desktop-Experience

Open the VCSA directory on the DVD, then run the VMware-Clientintegrationplugin.msi installer.

image03

Follow the installation and reboot your machine if required, then open the root directory on the DVD.

Open the vcsa-setup.html file, which should start your browser and launch the install page. If you get any popups regarding allowing the VMware Client integration plugin, click accept.

image12

Click upgrade. Then, select "continue upgrade".

image07

Accept the license agreement, then enter the details of your ESXi host you will be deploying to. This should be the ESXi host you have already deployed in Ravello, but can be one already managed by your current VCSA.

image06

Accept the certificate when given the warning. Then, enter the name of your virtual appliance. This name should match the name of your existing vCenter appliance.

image08

Then, configure your source vCenter, being the vCenter that you are upgrading from.

image04

Enter your old VCSA hostname, password, SSO port and the hostname, username and password for the ESXi host your current VCSA is running on.

Select your appliance size and datastore. For the appliance size, there are very few reasons to use anything but Tiny when running in a lab environment.

As the last step, configure the temporary network.

image14

The temporary network is the port group on your new ESXi host which should be reachable from your other VCSA appliance. For the network address, the subnet mask and the gateway, keep in mind that these are not the IP addresses of your new vCenter, but a temporary IP address that will be used while the new VCSA is migrating data from the old VCSA. As such, it should not be an existing IP address. In addition, ensure DNS servers are entered and that you can resolve the hostnames of your old vCenter and ESXi host on these DNS servers; otherwise your installation will fail.
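The rules for the temporary address can be checked mechanically. A small sketch with Python's `ipaddress` module that verifies a candidate temporary IP sits in the gateway's subnet and collides with nothing in use (the addresses below are hypothetical lab values, not taken from the screenshots):

```python
import ipaddress

def temp_ip_ok(temp_ip, netmask, gateway, existing_ips):
    """The temporary VCSA address must share the gateway's subnet and must
    not collide with the gateway or any address already in use."""
    net = ipaddress.ip_network(f"{gateway}/{netmask}", strict=False)
    ip = ipaddress.ip_address(temp_ip)
    return ip in net and temp_ip not in existing_ips and temp_ip != gateway

# Hypothetical lab values
assert temp_ip_ok("10.0.0.99", "255.255.255.0", "10.0.0.1",
                  existing_ips={"10.0.0.40", "10.0.0.50"})
assert not temp_ip_ok("10.0.1.99", "255.255.255.0", "10.0.0.1",
                      existing_ips=set())   # wrong subnet
```

Run the check with your own subnet and the addresses of the old VCSA, the ESXi hosts and the gateway before starting the installer; a collision here is one of the more common causes of a stalled migration.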

Review the settings, click complete and wait until the installation is done. This can take anywhere between 15 minutes up to 90 minutes. Keep in mind that your browser might not always refresh, but if you wish to follow the status you can always open the console of your new VCSA VM.

After the upgrade is complete, you should be presented with the following screen:

image05

Launch the web client (or if you have already configured ports to be forwarded in Ravello connect from your own computer) and try to log in with your administrator@vsphere.local credentials.

If the vSphere web client doesn't load, ensure that you have the Desktop Experience role installed, which is required to load all vSphere web client components. This can be done through the "Add roles and features" wizard included in Windows or, more easily, through the following PowerShell command:

Install-WindowsFeature Desktop-Experience

After logging in, you will be presented with your new vCenter 6 web client. If you open the help -> About VMware vSphere in the top right, you should see the new version number.

image00

And that's it for the VCSA upgrade! Afterwards, you can proceed with the "Upgrading ESXi" if you wish to upgrade your ESXi hosts to version 6 as well.

The post How to setup your ESXi lab for upgrading from VMware vSphere 5.5 to 6.0 appeared first on The Ravello Blog.

Beyond Mininet: Use Ravello to test layer two OpenDaylight services in the cloud



Author:
John Sobanski
John Sobanski (Sr. Systems Architect) has been with Solers, Inc. for over ten years. John enjoys architecture, business development and machine learning. He has been an early advocate of the OpenDaylight platform and Ravello to both Public and Private customers.

OpenDaylight allows network engineers to control switches with high-level intelligence and abstracted services. Before Ravello, your engineers needed to deploy physical switches or use Mininet in order to integrate and test OpenDaylight. Neither AWS, Google Cloud, nor Azure provides native access to layer two (Ethernet, VLAN, LLDP, etc.) in the cloud. Ravello, however, provides a simple method to access Layer 2 (L2) services in the cloud. This lab will show you or your engineers how to integrate and test OpenDaylight in the cloud, using full virtual machines (VMs) instead of Mininet containers.

In this blog post you will learn how to:

  • Connect virtual machines to a dedicated virtual switch VM in the cloud with Ravello
  • Deploy and configure OpenDaylight
  • Use a REST API to configure your network switch
  • Easily steer flows through a firewall on ingress, but bypass on egress using OpenDaylight

Get it on Repo
REPO, by Ravello Systems, is a library of public blueprints shared by experts in the infrastructure community.

Scenario

You have a product distribution system where egress throughput greatly exceeds ingress throughput. For security reasons, you perform Deep Packet Inspection (DPI) on flows between external (EXT) hosts and your Demilitarized Zone (DMZ) proxies.

image02

To ensure internetwork communications pass through the DPI, you implement a DPI "router on a stick" where a switch "bent pipes" the traffic at L2.

image01

The egress traffic will increase past the capacity of the DPI appliance.

You realize that there are cheaper methods of securing your egress flows than upgrading to a bigger DPI appliance.

With egress flows you want to ensure that return/ACK traffic does not include exploits and that egress flows do not facilitate zombies or “phone home” exploits.

  • Some ideas:
    • Ensure only approved ports:
      • Access Control List (ACL)
      • Iptables
      • Host firewalls
    • Mitigate against malicious code over approved ports:
      • HIDS on Servers
      • Uni-directional bulk data push with Error Detection and Correction over one way fiber
      • TLS with X.509 certificates

You would like to have DPI inspection on ingress flows, but not egress, since the other security measures will cover the egress flows.

  • One approach is to add "don't scan egress flows" logic to your DPI appliance, but that wastes capacity/resources and could saturate the backplane
  • An approach with legacy Network protocols is very difficult to implement, and results in asymmetric routes (i.e., will break things)
  • Using OpenDaylight, we have a simple solution that only requires matches/actions on six (6) flows

The goal:

  • When EXT initiates, pass through DPI
  • When DMZ initiates:
    • Bypass DPI on PUT (egress)
    • Scan on GET (ingress)

image04

Here is the logic for our OpenFlow rule set:

  1. ACL only allows permitted flows
  2. For ingress (EXT -> DMZ) flows, allow normal path to virus scan via gateway
  3. For egress (DMZ -> EXT) PUT flows, intercept packet
    1. Change destination MAC from gateway to EXT
    2. Change destination Port from gateway to EXT
    3. Decrement TTL by one
  4. For egress (DMZ -> EXT) GET flows (treat as ingress)
    1. DMZ uses dummy IP for EXT server
    2. Switch intercepts packet
    3. Switch changes source IP to dummy DMZ address
    4. Switch changes destination IP to correct EXT IP
    5. Packet continues on its way to gateway
    6. Reverse logic for return traffic
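OpenDaylight exposes flow programming through its RESTCONF API. As an illustration of rule 3 (rewrite the destination MAC on egress PUT traffic), here is a hedged sketch that builds the JSON body and config-datastore URL for such a flow; the URL layout and action names follow the standard opendaylight-inventory / flow-node-inventory models, but verify them against your ODL release before use:

```python
import json

ODL = "http://controller:8181/restconf/config"  # controller address is an example

def dl_dst_rewrite_flow(flow_id, src_mac, new_dst_mac, out_port, priority=200):
    """Flow body for rule 3: match the DMZ host's source MAC, rewrite the
    destination MAC from the gateway to the EXT host, decrement the TTL
    and forward out the EXT-facing port."""
    return {"flow": [{
        "id": str(flow_id),
        "table_id": 0,
        "priority": priority,
        "match": {"ethernet-match": {
            "ethernet-source": {"address": src_mac}}},
        "instructions": {"instruction": [{
            "order": 0,
            "apply-actions": {"action": [
                {"order": 0, "set-dl-dst-action": {"address": new_dst_mac}},
                {"order": 1, "dec-nw-ttl": {}},
                {"order": 2, "output-action":
                    {"output-node-connector": str(out_port)}},
            ]},
        }]},
    }]}

# MAC addresses from the architecture diagram; PUT `payload` to `url`
body = dl_dst_rewrite_flow(1, "BA:74:4C:7A:93:50", "72:57:E7:E1:B4:5F", 1)
payload = json.dumps(body)
url = f"{ODL}/opendaylight-inventory:nodes/node/openflow:1/table/0/flow/1"
```

PUT the payload to the URL (with curl or your HTTP client of choice, content type application/json) and the controller pushes the flow down to the switch; the remaining five flows differ only in match fields and actions.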

Lab Setup

This setup goes into the details of our test bed architecture. You can either create the architecture from scratch or use the blueprint Solers provides in the Ravello library.

Architecture

Our test uses the following architecture, and Ravello allows us to access layer two services in a cloud environment:

image03

Deploy four Linux virtual machines with Open vSwitch version 2.3.1 or greater.

image06

You can leave the management ports for all VMs with the default (AutoMac/ VirtIO/ DHCP/ Public IP) settings.

Be sure to enable SSH as a service for all four VMs.

Your central "s3" VM will contain the virtual switch and controller, so open up ports 8181 (ODL) and 8080 (Web).

image05

Each of the arrows in our Architecture diagram represents a physical link, or wire.

We simulate these physical wires in the Ravello layer as a network underlay.

While we configure this Ravello layer with IP, the Ravello layer presents these networks as physical links to our Virtual Machines:

image08

Some troubleshooting hints:

  • Ensure all ports are trunk ports (it is okay to keep the Management ports as Access 1)
  • You will be tempted to make the underlay links /30, since they are point to point. Ensure, however, that you make these /24s, as in the diagram above

We do not show the management ports (eth0) in the diagram above, since they are out-of-band.

Be sure to include the MAC addresses above, since we will use these values to trigger OpenDaylight services.

Configure your canvas to match the same Layer 3 and Layer 2 topology above. As an example, you would set the following Network configurations for the "ext" VM above:

  • Name: eth1
  • MAC: 72:57:E7:E1:B4:5F
  • Device: e1000 (default)
  • Static IP: 172.16.103.2
  • Netmask: 255.255.255.0
  • Gateway:
  • DNS:
  • External Access: Inbound (OFF), Outbound (ON), Public IP (Uncheck "even without external services")
  • Advanced: Mode (Trunk), VLAN Tags ()

Repeat the appropriate configurations for all four Virtual Machines. Your network will look like the following diagram:

image07

Once you finish configuring your Ravello layer, you can SSH into the virtual machines. Note, at this virtual machine layer you will configure different IP addresses for the Virtual Machine NICs (but the MAC addresses will match).

EXT Server

The EXT server simulates an un-trusted client and server.
Edit the NIC:

 $ sudo vim /etc/network/interfaces.d/eth1.cfg
  auto eth1
  iface eth1 inet static
  address 10.10.1.102
  netmask 255.255.255.0
  post-up route add -net 10.10.2.0 netmask 255.255.255.0 gw 10.10.1.1
  post-up route add -net 6.6.6.0 netmask 255.255.255.0 gw 10.10.1.1

You will need to restart the network service for the change to take effect.

$ sudo service networking restart

Then upload server.py and create a file named "test.txt".
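The server transcript later in the lab shows a five-byte payload ("Test" plus a trailing newline, matching `Content-Length: 5`), so a matching file can be created like this (the exact contents are an assumption; any text works):

```shell
# Create the upload payload used by the PUT tests later in the lab.
# "Test" plus the trailing newline is 5 bytes, matching the Content-Length: 5
# shown in the server output; any content works.
echo "Test" > test.txt
```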

Finally, issue the following command to pre-populate the arp table:

 $ sudo arp -s 10.10.1.1 5A:F6:C6:6A:DB:05

DMZ Server

Run the following shell command:

 $ sudo vim /etc/network/interfaces.d/eth1.cfg
  auto eth1
  iface eth1 inet static
  address 10.10.2.101
  netmask 255.255.255.0
  post-up route add -net 10.10.1.0 netmask 255.255.255.0 gw 10.10.2.1
  post-up route add -net 5.5.5.0 netmask 255.255.255.0 gw 10.10.2.1
 $ sudo service networking restart
 $ sudo arp -s 10.10.2.1 FE:C3:2D:75:C2:26

In addition, upload server.py and create a file named "test.txt".

Firewall

You need to turn the "firewall" into a router to pass traffic between the two NICs and make the change permanent:

$ sudo sysctl -w net.ipv4.ip_forward=1
$ sudo vim /etc/rc.local
     sysctl -w net.ipv4.ip_forward=1
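Before moving on to the NICs, it is worth confirming that the forwarding setting took effect. This verification sketch reads the value back from /proc (expect 1 on the fw VM once the sysctl command above has run):

```shell
# Print the current forwarding state: 1 = router mode, 0 = forwarding disabled.
cat /proc/sys/net/ipv4/ip_forward
```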

$ sudo vim /etc/network/interfaces.d/eth1.cfg
  auto eth1
  iface eth1 inet static
  address 10.10.1.1
  netmask 255.255.255.0


$ sudo vim /etc/network/interfaces.d/eth2.cfg
  auto eth2
  iface eth2 inet static
  address 10.10.2.1
  netmask 255.255.255.0
$ sudo service networking restart
$ sudo arp -s 10.10.1.102 72:57:E7:E1:B4:5F
$ sudo arp -s 10.10.2.101 BA:74:4C:7A:93:50

L2 Switch

First, ensure that your server has brought up all interfaces. If not, bring them up manually:

$ sudo ifconfig eth1 up
$ sudo ifconfig eth2 up
$ sudo ifconfig eth3 up
$ sudo ifconfig eth4 up

Then install OVS:

$ sudo apt-get install openvswitch-switch
$ sudo vim /etc/rc.local
 ifconfig eth1 up
 ifconfig eth2 up
 ifconfig eth3 up
 ifconfig eth4 up
 exit 0

$ sudo ovs-vsctl add-br br0
$ sudo ovs-vsctl add-port br0 eth1
$ sudo ovs-vsctl add-port br0 eth2
$ sudo ovs-vsctl add-port br0 eth3
$ sudo ovs-vsctl add-port br0 eth4
$ sudo ovs-vsctl set bridge br0 protocols=OpenFlow13

At this point, you should be able to ping from DMZ to EXT and vice versa. If this is not the case, then follow these troubleshooting hints:

  • Pre-populate the arp cache
  • Run route commands to ensure proper routes
  • Ensure all ports at the Ravello layer are trunk
  • Ensure all point to point links at the Ravello layer use a /24 and not /30
  • Ensure that the VM Mac Addresses match up with the Ravello layer MAC addresses
  • Ensure that NIC's eth1, eth2, eth3 and eth4 on SW3 do not have IP addresses and that the OVS switch ports match up with the Linux Kernel switch ports
  • To do this, run $ sudo ovs-ofctl -O OpenFlow13 show br0

Do not proceed until you can ping full mesh across DMZ, EXT, and the FW virtual machines (excluding management ports).

Install OpenDaylight

OpenDaylight allows you to control switches with high level intelligence and abstracted services.
First, if you do not already have Java installed, you need to install Java 7(+):

 $ sudo apt-get install openjdk-7-jdk
 $ sudo update-alternatives --install /usr/bin/java java /usr/lib/jvm/java-7-openjdk-amd64/bin/java 1
 $ sudo update-alternatives --config java

Then add the following line to the end of your ~/.bashrc file:

 export JAVA_HOME=/usr/lib/jvm/java-7-openjdk-amd64 # This matches sudo update-alternatives --config java

Then download, unzip and run OpenDaylight:

$ wget https://nexus.opendaylight.org/content/repositories/opendaylight.release/org/opendaylight/integration/distribution-karaf/0.3.2-Lithium-SR2/distribution-karaf-0.3.2-Lithium-SR2.zip
$ sudo apt-get install unzip
$ unzip distribution-karaf-0.3.2-Lithium-SR2.zip
$ /home/ubuntu/distribution-karaf-0.3.2-Lithium-SR2/bin/karaf clean

image11

This will take several minutes to start. Once you get the Karaf prompt, add only the following module:

 opendaylight-user@root>feature:install odl-l2switch-switch-ui

Installing the odl-l2switch-switch-ui module may also take several minutes.

You can check to see if OpenDaylight started by running:

 $ sudo netstat -an | grep 6633

Finally, upload and unzip the REST API scripts.

Connect your OpenVswitch to OpenDaylight

Open a new shell to SW3; killing the Karaf prompt shuts down OpenDaylight, so leave it running.

Then, connect to the local Controller:

$ sudo ovs-vsctl set-controller br0 tcp:0.0.0.0:6633
$ sudo ovs-vsctl set controller br0 connection-mode=out-of-band
$ sudo ovs-vsctl list controller

When you list the controller, you will want to see:

 connection_mode : out-of-band
 is_connected : true
 target: "tcp:0.0.0.0:6633"

Ping around your network. It will take some time for the OpenDaylight controller to "learn" your network. The virtual switch off-loads all of the intelligence to the controller.

We recommend first pinging "in network" (i.e., have DMZ and EXT ping their local gateways), and then ping between networks.

Now go to the DLUX GUI and log in with admin/admin.

http://.220.71.123:8181/index.html#/login

You will see your devices in the OpenDaylight GUI:

image09

You can also dump the flows of the local switch to show that OpenDaylight "learned" the Layer 2 topology:

$ sudo ovs-ofctl -O OpenFlow13 dump-flows br0
OFPST_FLOW reply (OF1.3) (xid=0x2):
 cookie=0x2a00000000000000, duration=385.025s, table=0, n_packets=729, n_bytes=71328, idle_timeout=1800, hard_timeout=3600, priority=10,dl_src=ba:74:4c:7a:93:50,dl_dst=fe:c3:2d:75:c2:26 actions=output:2
 cookie=0x2a00000000000003, duration=28.425s, table=0, n_packets=2915, n_bytes=285404, idle_timeout=1800, hard_timeout=3600, priority=10,dl_src=72:57:e7:e1:b4:5f,dl_dst=5a:f6:c6:6a:db:05 actions=output:1
 cookie=0x2a00000000000002, duration=28.438s, table=0, n_packets=1695, n_bytes=166034, idle_timeout=1800, hard_timeout=3600, priority=10,dl_src=5a:f6:c6:6a:db:05,dl_dst=72:57:e7:e1:b4:5f actions=output:3
 cookie=0x2a00000000000001, duration=385.030s, table=0, n_packets=1947, n_bytes=190654, idle_timeout=1800, hard_timeout=3600, priority=10,dl_src=fe:c3:2d:75:c2:26,dl_dst=ba:74:4c:7a:93:50 actions=output:4
 cookie=0x2b00000000000000, duration=412.169s, table=0, n_packets=0, n_bytes=0, priority=0 actions=drop
 cookie=0x2b00000000000003, duration=408.298s, table=0, n_packets=914437, n_bytes=89614674, priority=2,in_port=3 actions=output:2,output:1,output:4,CONTROLLER:65535
 cookie=0x2b00000000000001, duration=408.298s, table=0, n_packets=960900, n_bytes=94168048, priority=2,in_port=1 actions=output:2,output:4,output:3,CONTROLLER:65535
 cookie=0x2b00000000000002, duration=408.298s, table=0, n_packets=906546, n_bytes=88841280, priority=2,in_port=4 actions=output:2,output:1,output:3,CONTROLLER:65535
 cookie=0x2b00000000000000, duration=408.298s, table=0, n_packets=905487, n_bytes=88737612, priority=2,in_port=2 actions=output:1,output:4,output:3,CONTROLLER:65535
 cookie=0x2b00000000000001, duration=412.168s, table=0, n_packets=7, n_bytes=1400, priority=100,dl_type=0x88cc actions=CONTROLLER:65535

Lab Execution

To observe the OpenDaylight-triggered "egress bypass" service, follow these steps:

  1. Observe baseline operations
    • Push a file from the DMZ server to the EXT server
    • Observe that traffic passes through the firewall
  2. Configure our switch with OpenDaylight
    • Use the REST API to inject the egress bypass rules into our switch
  3. Observe egress bypass
    • Push a file from the DMZ server to the EXT server once more
    • Now observe that traffic does not pass through the firewall
  4. Observe ingress scanning
    • Trigger the DMZ server to pull a file from the EXT server
    • Since this flow is ingress, we will observe the traffic pass through the firewall

Observe Baseline Operations

Open separate SSH terminals for your external (EXT) server, DMZ server, and the firewall. On the EXT server, start the Python web server, which accommodates both GET and PUT, with the following command:

 ubuntu@ext:~$ sudo python ./server.py 80

Snoop the traffic on your firewall (FW) with the following command:

 ubuntu@fw:~$ clear; sudo tcpdump -i eth2 port 80

Now PUSH a file from DMZ to EXT:

 ubuntu@dmz:~$ curl http://10.10.1.102 --upload-file test.txt

We will see the PUT succeed on the EXT shell, via the following message:

ubuntu@ext:~$ sudo python ./server.py 80
Starting a server on port 80
----- SOMETHING WAS PUT!! ------
User-Agent: curl/7.35.0
Host: 10.10.1.102
Accept: */*
Content-Length: 5
Expect: 100-continue
10.10.2.101 - - [04/Dec/2015 15:49:29] "PUT /test.txt HTTP/1.1" 200 -
Test

Our PUSH from DMZ to EXT took a path through the firewall, so we see a packet dump on the snoop shell:

image10

Configure Switch with OpenDaylight

If you haven't already, start and connect to OpenDaylight. Refer to the Lab Setup section above for details. Once it has started, use the REST API to discover the ID of your virtual switch. In any browser, go to the following address:

http://:8080/restconf/operational/opendaylight-inventory:nodes/

You should see just one node, your local OVS switch. Copy the ID of the node. For example, we list our ID below (NOTE: DO NOT USE THIS ID, YOURS WILL BE DIFFERENT).

image12

Our switch uses ID 49213347348856. Use this ID with the put_flows.sh script to inject the flows into the switch through the REST API. Alternatively, you can install the flows manually using Postman. From the shell of SW3, run the following command:

ubuntu@sw3:~/demo_fw_flows_ravello$ ./put_flows.sh 49213347348856
> PUT /restconf/config/opendaylight-inventory:nodes/node/openflow:49213347348856/table/0/flow/404 HTTP/1.1
< HTTP/1.1 200 OK
> PUT /restconf/config/opendaylight-inventory:nodes/node/openflow:49213347348856/table/0/flow/505 HTTP/1.1
< HTTP/1.1 200 OK
> PUT /restconf/config/opendaylight-inventory:nodes/node/openflow:49213347348856/table/0/flow/606 HTTP/1.1
< HTTP/1.1 100 Continue
< HTTP/1.1 200 OK
> PUT /restconf/config/opendaylight-inventory:nodes/node/openflow:49213347348856/table/0/flow/707 HTTP/1.1
< HTTP/1.1 100 Continue
< HTTP/1.1 200 OK
> PUT /restconf/config/opendaylight-inventory:nodes/node/openflow:49213347348856/table/0/flow/808 HTTP/1.1
< HTTP/1.1 100 Continue
< HTTP/1.1 200 OK
> PUT /restconf/config/opendaylight-inventory:nodes/node/openflow:49213347348856/table/0/flow/909 HTTP/1.1
< HTTP/1.1 100 Continue
< HTTP/1.1 200 OK
ubuntu@sw3:~/demo_fw_flows_ravello$
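
For reference, each line of that transcript corresponds to one RESTCONF PUT. A single flow install can be sketched as a curl call (the flow body file flow_404.xml is a hypothetical placeholder, admin/admin are the OpenDaylight defaults, and you must substitute your own switch ID):

```shell
# Build the RESTCONF URL for one flow entry; substitute your own switch ID.
SWITCH_ID=49213347348856
FLOW_ID=404
URL="http://localhost:8080/restconf/config/opendaylight-inventory:nodes/node/openflow:${SWITCH_ID}/table/0/flow/${FLOW_ID}"
echo "$URL"
# Run this on SW3 once OpenDaylight is up (flow_404.xml holds one flow body):
# curl -u admin:admin -X PUT -H "Content-Type: application/xml" \
#      --data @flow_${FLOW_ID}.xml "$URL"
```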

If you do not see an "OK" for every flow then run the script again. You can verify that OpenDaylight populated the switch with the following command:

ubuntu@sw3:~$ sudo ovs-ofctl -O OpenFlow13 dump-flows br0
OFPST_FLOW reply (OF1.3) (xid=0x2):
 cookie=0x0, duration=120.455s, table=0, n_packets=0, n_bytes=0, priority=200,tcp,in_port=2,nw_src=10.10.1.102,nw_dst=10.10.2.101,tp_src=80 actions=set_field:5.5.5.5->ip_src,set_field:ba:74:4c:7a:93:50->eth_dst,output:4
 cookie=0x0, duration=120.392s, table=0, n_packets=0, n_bytes=0, priority=200,tcp,in_port=3,nw_src=10.10.1.102,nw_dst=6.6.6.6,tp_src=80 actions=set_field:10.10.2.101->ip_dst,set_field:5a:f6:c6:6a:db:05->eth_dst,output:1
 cookie=0x0, duration=120.616s, table=0, n_packets=0, n_bytes=0, priority=300,tcp,in_port=3,nw_src=10.10.1.102,nw_dst=10.10.2.101,tp_src=80 actions=set_field:ba:74:4c:7a:93:50->eth_dst,output:4
 cookie=0x2b00000000000000, duration=594.845s, table=0, n_packets=0, n_bytes=0, priority=0 actions=drop
 cookie=0x2b00000000000003, duration=591.004s, table=0, n_packets=0, n_bytes=0, priority=2,in_port=3 actions=output:2,output:1,output:4,CONTROLLER:65535
 cookie=0x2b00000000000001, duration=591.006s, table=0, n_packets=0, n_bytes=0, priority=2,in_port=1 actions=output:2,output:4,output:3,CONTROLLER:65535
 cookie=0x2b00000000000002, duration=591.004s, table=0, n_packets=0, n_bytes=0, priority=2,in_port=4 actions=output:2,output:1,output:3,CONTROLLER:65535
 cookie=0x2b00000000000000, duration=591.006s, table=0, n_packets=0, n_bytes=0, priority=2,in_port=2 actions=output:1,output:4,output:3,CONTROLLER:65535
 cookie=0x0, duration=120.580s, table=0, n_packets=0, n_bytes=0, priority=200,tcp,in_port=1,nw_src=10.10.2.101,nw_dst=10.10.1.102,tp_dst=80 actions=set_field:6.6.6.6->ip_src,set_field:72:57:e7:e1:b4:5f->eth_dst,output:3
 cookie=0x0, duration=120.513s, table=0, n_packets=0, n_bytes=0, priority=200,tcp,in_port=4,nw_src=10.10.2.101,nw_dst=5.5.5.5,tp_dst=80 actions=set_field:10.10.1.102->ip_dst,set_field:fe:c3:2d:75:c2:26->eth_dst,output:2
 cookie=0x0, duration=120.616s, table=0, n_packets=0, n_bytes=0, priority=300,tcp,in_port=4,nw_src=10.10.2.101,nw_dst=10.10.1.102,tp_dst=80 actions=set_field:72:57:e7:e1:b4:5f->eth_dst,output:3
 cookie=0x2b00000000000000, duration=594.845s, table=0, n_packets=20, n_bytes=4000, priority=100,dl_type=0x88cc actions=CONTROLLER:65535
ubuntu@sw3:~$

In addition, you can use the REST API with a browser to see the flows. Be sure to substitute your ID in the URL:

http://:8080/restconf/operational/opendaylight-inventory:nodes/node/openflow:49213347348856/table/0

image13

Observe egress bypass

At this point, you should still have your Python server running on EXT and a snoop running on FW. If not, go to baseline operations above to set these up. Now, run the PUSH command from the DMZ server and observe the action:

 ubuntu@dmz:~$ curl http://10.10.1.102 --upload-file test.txt

Again, we see "SOMETHING WAS PUT" on our EXT server...

image14

...but this time we do not see traffic on the firewall!

image15

Now, let's do a DMZ GET to EXT. In this case, we treat the flow as ingress, even though the DMZ initiates it.

We use a dummy IP to trigger a flow match. The egress port of the switch will NAT it back to the real destination IP.
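The original post does not show the GET command itself; reading the installed flows, a plausible reconstruction is that the DMZ targets the dummy address 5.5.5.5, which the priority-200 ingress rule rewrites to the real EXT address 10.10.1.102 (treat both the address and the command as assumptions):

```shell
# Dummy destination from the priority-200 ingress flow; the switch NATs it
# back to the real EXT server (10.10.1.102) on the egress port.
DUMMY_URL="http://5.5.5.5/test.txt"
echo "$DUMMY_URL"
# Run from the DMZ shell inside the lab:
# curl "$DUMMY_URL"
```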

We see instant feedback on the DMZ Console:
image16

Go to the EXT server and you will see a notice of the GET (note the dummy IP for our server).
image17

Finally, go to the FW snoop shell and you will see this GET went through the firewall:

image18

Before you end the lab, remove the flows:

ubuntu@sw3:~/demo_fw_flows_ravello$ ./remove_flows.sh 49213347348856
> DELETE /restconf/config/opendaylight-inventory:nodes/node/openflow:49213347348856/table/0/flow/909 HTTP/1.1
< HTTP/1.1 100 Continue
< HTTP/1.1 200 OK
> DELETE /restconf/config/opendaylight-inventory:nodes/node/openflow:49213347348856/table/0/flow/808 HTTP/1.1
< HTTP/1.1 100 Continue
< HTTP/1.1 200 OK
> DELETE /restconf/config/opendaylight-inventory:nodes/node/openflow:49213347348856/table/0/flow/707 HTTP/1.1
< HTTP/1.1 100 Continue
< HTTP/1.1 200 OK
> DELETE /restconf/config/opendaylight-inventory:nodes/node/openflow:49213347348856/table/0/flow/606 HTTP/1.1
< HTTP/1.1 100 Continue
< HTTP/1.1 200 OK
> DELETE /restconf/config/opendaylight-inventory:nodes/node/openflow:49213347348856/table/0/flow/505 HTTP/1.1
< HTTP/1.1 200 OK
> DELETE /restconf/config/opendaylight-inventory:nodes/node/openflow:49213347348856/table/0/flow/404 HTTP/1.1
< HTTP/1.1 200 OK
ubuntu@sw3:~/demo_fw_flows_ravello$

For more fun with OpenDaylight, see Solers' presentation at OpenDaylight. You can find the PowerPoint here or the video here.

The post Beyond Mininet: Use Ravello to test layer two OpenDaylight services in the cloud appeared first on The Ravello Blog.

VyOS on AWS & Google cloud with Layer 2 networking

tumblr_inline_mx8tqxLPVu1qf2nrz

Networking enthusiasts rejoice – we are proud to share that VyOS is now available on Ravello Repo. Using Ravello's nested virtualization and networking overlay technology, it is now possible to run VyOS on AWS & Google cloud with Layer 2 networking.

Networking experts use VyOS – a community fork of Vyatta (now a Brocade product) and a Linux-based network operating system – for software-based network routing, firewalls and VPNs. VyOS supports many features such as a scriptable CLI, stateful configuration, image-based upgrades, and sophisticated routing protocols. All these and more make VyOS a popular choice as a networking and security virtual appliance in the open-source community, and it is now possible to run it on Ravello with one click.

Get it on Repo
REPO by Ravello Systems is a library of public blueprints shared by experts in the infrastructure community.

To run VyOS, please open a Ravello trial account, add the VyOS VM to your library, and start building sophisticated network deployments with data-center-like networking.

 

The post VyOS on AWS & Google cloud with Layer 2 networking appeared first on The Ravello Blog.

Installing and configuring vRealize Automation 7 lab environment on AWS and Google Cloud


ITQ

Released only a few days ago, vRealize Automation 7 is one of the biggest redesigns of any VMware product, including a new blueprint canvas, infrastructure-as-code, built-in application deployment with vRealize Orchestrator workflows, full integration of VMware NSX, and many more improvements.

Obviously, with a product this new, you’ll want to get familiar with it before even considering deployment in production. Especially considering the full redesign of the blueprint system and features such as vRealize Orchestrator integration, the upgrade path from vRealize Automation 6 to 7 can be quite complicated.

For this reason, we’ll show you how to set up a lab for vRealize Automation 7 using public cloud capacity, without needing to acquire hardware for a testing platform or having to worry about touching your production environment.

If you are a VMware reseller and/or system integrator, you can build vRA 7 labs like these on public cloud and use them for your sales demos, proofs of concept (POCs) and training environments. You pay hourly based on the size of your lab, and only when you are using it.

Preparing your environment

First off, we’ll start with the deployment of the required services on Ravello. Since most people interested in vRealize Automation will use the integration with VMware vSphere, we’ll deploy that. For testing and deploying the IaaS feature, we’ll need the following servers:

  • 1 vCenter server running Windows Server 2008R2 or later (we’ll use 2012R2 in this example) with 2 vCPU, 8GB memory and 50GB of storage. We’ll install VMware vCenter 6u1 on this server.
  • 1 ESXi server running ESXi with 2 vCPU, 8GB memory and 10GB of storage.

For the exact deployment of VMware vSphere on Ravello, please see VMware ESXi Smart Labs on AWS or Google cloud for detailed instructions. This step is completely optional though and only required for the IaaS part of vRealize automation.

In addition, we’ll need some servers for the supporting infrastructure:

  • 1 Active Directory Domain Controller running Windows 2008R2 or later. In a test environment, this is usually combined with one of the other roles, such as the SQL server or the vCenter server.
  • 1 Microsoft SQL server running Windows 2008R2 or later.

To deploy the vRealize Automation roles on Ravello, we’ll need the machines that will host the vRealize infrastructure:

  • 1 vRealize Automation IaaS server running Windows Server 2008R2 or later. In our case, this server will host all vRealize Automation Roles (IaaS, DEM, and Proxy agent) in addition to Microsoft SQL server. This host should be configured with 2 vCPU, 8GB of memory and 60GB of storage.
  • 1 vRealize Automation virtual appliance. This appliance will be deployed through the Ravello VM import tool. This machine should be configured with 2 vCPU, 12GB of memory and 60GB of storage.

After setting up the initial environment, your blueprint will look comparable to this:

image04

Deployment of vRealize automation

Before we start off with the deployment, there are some requirements you must take into consideration. All machines running in this lab need to synchronize their time to the same NTP source. This could be your domain controller or an external NTP server, but ultimately the times should be within 300 seconds of each other. In addition, proper DNS configuration is required. The Ravello DNS should be perfect for this, but you’ll need to ensure that hostnames in Ravello match the hostnames within the guest operating systems.

We’ll start off with the preparation of the infrastructure machines. Log in to the database host and install Microsoft SQL Server if you haven’t done so already. After installing Microsoft SQL Server, enable TCP/IP as a protocol for the database server and enable the Microsoft Distributed Transaction Coordinator (MSDTC) service. For more information on how to do this, see the section “To enable MSDTC on each web server on Windows server 2008” on this page.

Next, we move on to the IaaS server. Again, ensure that MSDTC is installed and enabled. If you cloned this machine from the same template as the SQL server, uninstall and reinstall MSDTC: type “msdtc -uninstall” followed by “msdtc -install” at an elevated command prompt. In addition, ensure the Secondary Logon service is running.

Import the vRealize Automation virtual appliance next. Download the vRealize Automation 7 appliance – and, if you want to, the Orchestrator appliance – from this page. Extract the OVF from the OVA file (for more information on how to do this, see this page) and upload it to Ravello using the image import tool. After uploading the OVF, deploy the virtual machine to Ravello and open the console. After a short while, the system will require you to change your password. Since we cannot configure the OVF properties during the deployment, we’ll need to set those manually.
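Since an OVA is simply a tar archive bundling the OVF descriptor, manifest and disk files, the extraction step mentioned above can be sketched as follows (the filename is illustrative; substitute the appliance file you actually downloaded):

```shell
# An .ova bundles the .ovf descriptor, manifest and .vmdk disks as a tar archive.
OVA="vRealize-Automation-7.ova"
echo "tar -xvf $OVA"
# Run in the directory where you downloaded the appliance:
# tar -xvf "$OVA"   # yields the .ovf, .mf and .vmdk files for the import tool
```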

Press enter on the error message during the first boot; this is expected, and you can change this after the VM has booted. During the deployment, the appliance will ask you to set a password. Change it to anything you like and log in to the console with the username root and the password you just set. Then, run /opt/vmware/share/vami/vami_config_net to configure the IP addressing, DNS and routing. In addition, if you want to get rid of the warning during boot, follow the instructions to modify the boot.compliance init file discussed in the post How to run vCenter 5.5 appliance on AWS using Ravello.

After the network and password have been configured, configure the virtual machine in the Ravello canvas to publish ports 80, 443 and 5480 on this appliance. This allows you to log in to the management interface on the external address instead of having to use a virtual machine within your lab environment.

After deploying the vRealize Automation appliance, log in to the IaaS server and download the vRealize agent from https://ip-of-appliance:5480/installer.

image05

Run the msi installer and follow the steps. During the management site service setup, enter the following details:

  • vRA appliance address: https://fqdn.of.vra.appliance:5480/
  • root username: root
  • password: the password just set on the console
  • management site service certificate SHA1 fingerprint: click “Load” and check the confirmation checkbox.

image09

Next, enter the credentials of a Windows account that you’d like to use as your vRealize service account. For a test environment it’s not required to create a dedicated service account, but in production it would be highly recommended.

image07

When you log in to the management interface of the appliance, the installation wizard should appear. Click “Next” and accept the EULA. Select “Minimal deployment” and “Install Infrastructure as a service”, then click “Next”.

Change the time settings to use NTP and configure the same server as you’ve set up on the rest of your environment.

image11

In the prerequisite screen, click “Run” to verify that your Windows servers are correctly configured. The prerequisite step can take 5 minutes, so be patient. When this step finishes, prerequisites will either fail or succeed. If you encounter a failure, don’t worry: vRealize Automation 7 can resolve most failures automatically; click “Fix” to resolve the issue and reboot Windows. If the issue cannot be resolved automatically, click “Show details” to get more information on the issue.

image02

When all issues are resolved, click “Next”. This will allow you to continue on to the configuration.

In the next step, you have two choices:

  • Configure vRealize Automation with the internal hostname you are using within the lab. This allows you to reuse your lab as a blueprint and deploy it multiple times, but requires you to create an entry in the hosts file of your machine to be able to resolve the internal address.
  • Configure vRealize Automation with the external hostname of your Ravello application. This allows you to access the vRealize Automation deployment directly from the outside but prevents you from duplicating the setup as a blueprint.

In either case, select “Enter host”, since Ravello currently does not support reverse DNS, and enter the hostname of your choice.

image03

Next, in “Single sign-on”, configure a password for the default administrator account.

image13

In the “IaaS host” step, enter the internal FQDN of your IaaS Windows host. In addition, enter a Windows username (including your domain component if running Active Directory) and the password. Lastly, enter a database encryption password, which is used to encrypt certain sensitive data in the SQL database.

image06

Continue with the SQL database. Enter the server’s FQDN and a database name. Select “Create new database” and “Default settings”. If you’ve selected “Windows authentication”, ensure that the account you are running the vRA management agent under has SA permissions on the SQL server.

image00

As you can see, in our case we have chosen Windows authentication, which will use the account you've selected to run the IaaS components under. If it's a domain account, ensure this account has SA permissions on the server. If you can't provide SA permissions, create the database in advance, select "Use existing empty database" and ensure the account has db_owner permissions on this database.

Next, we need to set up the DEM workers. Select the IaaS host (you should only have one), enter an instance name (anything will do; this is purely administrative) and enter a Windows username and password, which can be the same account your other services run under.

image01

Under the agents section, we need to add one agent. If you wish to run multiple agents (for example, if you have multiple vCenters or would like to test vRA against AWS or OpenStack as well), you’ll need to add additional agents. Select your IaaS host and enter an agent name and endpoint name. It’s recommended to use a clear name for the endpoint field, since you’ll need to reuse it inside vRA when configuring your endpoint; usually I use the FQDN of the vCenter server that I’m using the agent for. Select “vSphere” as the agent type and – again – enter your Windows account.

image10

Next, a certificate needs to be generated for the vRealize appliance. Here you can either generate a self-signed certificate or – if you have one – import an SSL certificate. For now we’ll use a self-signed certificate, since this is a demo environment. Fill in the certificate fields with your personal information, click “Save Generated Certificate” and click continue when it’s finished. Repeat these steps for both the Web service and the Manager service.

image12

Now you’re finished, and a final validation can be run. This can take 10 to 15 minutes, so prepare to take a break, get some coffee and wait for the validation to complete.

image15

The next step – Create snapshots – can be skipped, since this is a demo environment.

The installation will start running now. This can take around 30 minutes, so prepare for a slightly longer break. If anything fails, you can retry either the failed components or all IaaS components. After half an hour, all status checks should be green and you can almost use your new vRealize Automation 7 setup. One last thing to do is to enter your license key, choose whether or not to join the Customer Experience Improvement Program, and enter a password for the "configurationadmin" user, which will allow you to import initial content into your vRealize Automation 7 setup.

image14

Once you’ve configured all the steps, deployment and configuration of vRealize Automation should take about 30-45 minutes. After this, you should have a basic vRealize environment set up for you to test and configure.

image08

Logging in can be done by navigating to the hostname of your vRealize Automation appliance. If you're familiar with vRealize Automation 5 or 6, you'll notice one thing straight away: no more SSO appliances or vSphere SSO to configure! VMware Identity Management is a new feature released with vRA 7, which means that management of accounts and credentials is now built into the appliance. It also means that if you are using multiple tenants, they can log in to the same vRealize Automation URL; you won't have to worry about providing separate tenant URLs to different customers anymore.

image16

Once you're logged in with the default administrator account you created earlier, you can start to add Active Directory, give your existing admin user permissions, or create a new customer tenant to consume your vSphere environment and provide your new unified blueprints. Congratulations on being the first one in your company to run a vRealize Automation 7 lab environment, and enjoy exploring all the new features!

The post Installing and configuring vRealize Automation 7 lab environment on AWS and Google Cloud appeared first on The Ravello Blog.


How to run Cumulus switch on AWS & Google cloud


cumulus

Cumulus Networks provides a Linux-based OS for data-center switches and has seen great adoption in the last year. Enterprises love Cumulus since existing Linux management, automation and monitoring tools work seamlessly with Cumulus switches – dramatically simplifying data-center operations. Network architects want to try out Cumulus, but before rolling out new technology on their networks they want to build out a leaf-spine topology at realistic enterprise scale and test things out.

Get it on Repo

REPO by Ravello Systems is a library of public blueprints shared by experts in the infrastructure community.

Ravello's Network Smart Labs presents a great platform where all this and more is possible. With the underlying technologies that power Ravello – nested virtualization and networking overlay – it is possible to create full-featured deployments with data-center networking (Layer 2) on AWS. Whether you want to build a Cumulus leaf-spine deployment from scratch or use an existing deployment as a starting point, Ravello presents an easy-to-use platform to do so.

VX_KVM_topo_noOOB

 

In fact, there is a pre-built Cumulus switch leaf-spine topology on Ravello Repo that you can run with a single click. Just open a Ravello trial account and add the blueprint to your Ravello library. If you want to build and run Cumulus switch deployments from scratch, Christian Elsen has written a very nice article on running Cumulus switches on AWS and Google cloud.

 

The post How to run Cumulus switch on AWS & Google cloud appeared first on The Ravello Blog.

Five most popular penetration testing tools


PenTest_Image

Ethical hackers are embracing public cloud for penetration testing. Using Ravello on AWS and Google cloud, enterprises are creating high-fidelity replicas of their production environments – and using them for penetration testing to find and fix vulnerabilities in their networks, web properties and applications before a hacker does. This article looks at the five most popular tools used by ethical hackers for penetration testing –

1. Kali Linux – Kali is one of the most popular suites of open-source penetration testing tools out there. It is essentially a Debian-based Linux distro with 300+ pre-installed security & forensic tools, all ready to go. The most frequently used tools are -

    1. Burp Suite - for web application pentesting. Burp Suite can be used for initial mapping and analysis of an application's attack surface, and for finding and exploiting security vulnerabilities. It contains a proxy, spider, scanner, intruder, repeater, and sequencer tool.
    2. Wireshark - a network protocol analyzer that needs no introduction
    3. Hydra - a tool for online brute-forcing of passwords
    4. Maltego - a tool for intelligence gathering
    5. Aircrack-ng - a wireless network cracking tool
    6. John - an offline password cracking tool
    7. OWASP ZAP - for finding vulnerabilities in web applications. ZAP contains a web application security scanner with an intercepting proxy, automated scanner, passive scanner, brute-force scanner, fuzzer, port scanner and more.
    8. Nmap - for network scanning. Nmap is a security scanner with features for probing computer networks, including host discovery and service and operating-system detection – generally, mapping the network's attack surface. Nmap features are extensible by scripts that provide more advanced service detection and vulnerability detection.
    9. Sqlmap - for exploiting SQL injection vulnerabilities

One can download Kali Linux from the Kali website and install the ISO on an empty VM on Ravello with a couple of clicks.
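
The core technique behind Nmap's default port scan – a TCP connect scan – is easy to picture. Below is a toy, stdlib-only Python illustration of that technique (not a substitute for Nmap, and something you should only ever point at hosts you are authorized to test):

```python
# Toy TCP connect scan -- a simplified illustration of the technique Nmap
# automates (real scanners add SYN scans, timing control, service probes, etc.).
# Only ever run this against hosts you are authorized to test.
import socket

def connect_scan(host, ports, timeout=0.5):
    """Return the subset of `ports` that accept a TCP connection."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:  # 0 means the connect succeeded
                open_ports.append(port)
    return open_ports

if __name__ == "__main__":
    # Scanning localhost keeps the example self-contained and harmless.
    print(connect_scan("127.0.0.1", [22, 80, 443, 8080]))
```

Nmap layers host discovery, OS fingerprinting and its scripting engine on top of this basic probe loop.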

2. Metasploit Community – The Metasploit framework enables one to develop and execute exploit code against remote target machines. Metasploit has a large programmer fan base that adds custom modules and test tools to probe for weaknesses in operating systems and applications. While the open-source Metasploit framework is built into Kali Linux, the more feature-rich versions – Metasploit Community edition and Metasploit Pro – are available from Rapid7 and highly recommended. Metasploit Pro comes with additional functionality such as Smart Exploitation (which automatically selects exploits suitable for the discovered target), VPN pivoting (which allows one to run any network-based tools through a compromised host), dynamic payloads to evade anti-virus / anti-malware detection, and a collaboration framework that helps a red team share information effectively.

3. CORE Impact – CORE Impact is equally appealing to newbies and experts. It provides a penetration testing framework that includes discovery tools, exploit code to exercise remote & local vulnerabilities, and remote agents for exploring and exploiting a network. CORE Impact works by injecting shellcode into a vulnerable process and installing a remote agent in memory that can be controlled by the attacker. A local exploit can then be used to elevate privileges, and the exploited host can then be used to look for other hosts to attack in the same manner. CORE Impact's easy-to-use interface (just point and attack!), flexible agents, regular exploit updates and built-in automation make it a popular choice for enterprises. But good things don't come cheap – CORE Impact carries a very expensive price tag.

4. Canvas – Canvas expects users to have considerable knowledge of pentesting, exploits and system insecurity, and it focuses on the exploitation aspects of penetration testing. It doesn't perform any discovery, but allows one to manually add hosts to its interface and initiate a port scan and OS detection. This discovered information becomes part of the host's 'knowledge', and the ethical hacker selects the appropriate exploits based on it. If an exploit is successful, a new node representing an agent appears in the node tree in Canvas. Nodes can be chained together through hosts (much like in CORE Impact) so that attacks can percolate deeper into the network. Although Canvas is a commercial tool (just like CORE Impact), it is roughly one-tenth the price of CORE Impact.

5. Nessus – Nessus is a vulnerability scanner that is very popular amongst security professionals. It comes with a huge library of vulnerabilities & tests to identify them. Nessus relies on responses from target hosts to identify holes, and the ethical hacker may use an exploitation tool (e.g. Metasploit) in conjunction to verify that reported holes are indeed exploitable.

So which is the best penetration testing tool out there? There is no one correct answer. It depends on the target, the scope, and the ethical hacker's proficiency with pentesting.

Interested in checking the effectiveness of your favorite pentesting tool? Just open a Ravello trial account, upload your VMs to recreate a high-fidelity replica of the environment you want to pentest, and point your favorite pentest tool at it. Since Ravello runs on public cloud with access to data-center-like networking, a growing number of enterprises are using it to create realistic pentesting environments at scale.

The post Five most popular penetration testing tools appeared first on The Ravello Blog.

Understanding vCloud Air pricing: How virtual private cloud on-demand compares to AWS

One of the biggest and clearest advantages public cloud computing has over traditional data centers is cost - with the cloud pricing model Capex becomes Opex, and with a quick pass through the provided calculators you know exactly what you're going to pay. No negotiation - "plug and play". With its own vCloud Air pricing calculator, does VMware-gone-on-demand also fit the description?

The short answer is yes. Virtual Private Cloud OnDemand is pay-per-usage. As is standard, CPU and RAM are metered per minute while VMs are powered on, and storage is metered per minute from the moment it is allocated to a VM. Public IPs are metered by the minute from allocation to a gateway. Support is charged as a percentage of the compute bill.
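
The per-minute metering model just described can be sketched in a few lines. The rates below are hypothetical placeholders (actual vCloud Air rates vary by region and configuration); the point is the shape of the calculation, not the numbers:

```python
# Sketch of the per-minute metering model described above.
# The per-minute rates are HYPOTHETICAL placeholders, not vCloud Air prices.
def monthly_cost(vms, rate_vcpu_min=0.0003, rate_gb_ram_min=0.0001,
                 rate_gb_storage_min=0.0000033, minutes=43200):  # 30 days
    """vms: list of (vcpus, ram_gb, storage_gb) tuples, powered on 24x7."""
    total = 0.0
    for vcpus, ram_gb, storage_gb in vms:
        total += minutes * (vcpus * rate_vcpu_min
                            + ram_gb * rate_gb_ram_min
                            + storage_gb * rate_gb_storage_min)
    return round(total, 2)

# The five-VM email-security application priced out below:
app = [(2, 4, 50), (4, 16, 200), (4, 16, 500), (2, 4, 50), (2, 4, 50)]
print(monthly_cost(app))
```

Plugging the actual published per-minute rates into a function like this is essentially what the vCloud Air calculator does for you.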

vCloud Air and AWS pricing example

In our narrower scope, I priced out an application on both the AWS and the vCloud Air calculators. For this scenario, let's consider an email security appliance. The appliance setup involves:

  1. Email security appliance: 2 vCPU, 4GB RAM, 50GB storage
  2. Email server (exchange): 4 vCPU, 16GB RAM, 200GB storage
  3. Database server: 4 vCPU, 16GB RAM, 500GB storage
  4. Active Directory: 2 vCPU, 4GB RAM, 50GB storage
  5. Windows client: 2 vCPU, 4GB RAM, 50GB storage

Summary of results

             Resources / configurations (vCPU, GB RAM, GB storage)  Estimated monthly cost
vCloud Air   3 x (2, 4, 50), 1 x (4, 16, 200), 1 x (4, 16, 500)     $820
AWS          3 x t2.medium, 2 x m4.xlarge, 850GB storage            $524

For both options I went with an application that runs 24x7 for a month. On vCloud Air this required three configurations, for the three types of VMs. Using standard storage, adding 1 public IP for the entire application and running in the US Virginia 1 region, the total price is $820/month, including ~$53 for support. On AWS this meant provisioning 3 t2.medium instances (Linux) and 2 m4.xlarge instances (Linux) and adding the necessary storage (850GB); running in the Virginia region, the estimate comes to $524/month.

So that I don’t give anyone the wrong idea: this is a good place to say that there are significant differences between Virtual Private Cloud OnDemand and your public cloud options, with both advantages and disadvantages. AWS, for instance, provides much more extensive additional services and a broader geographic spread. L2 networking, however, isn’t accessible there, while vCloud Air supports it. Here’s a more in-depth analysis of AWS vs vCloud Air. It’s not just a matter of price, and even there (as you will see in the following paragraph) things can change when optimizing for your own use case.

Buying options

Back to the longer answer. Where it might start to get confusing is when you consider your buying options, because you can actually pay for on-demand in advance with a subscription purchasing program. That might sound a little less on-demand. But VMware SPPs can be seen in this context strictly as a different way to buy: get initial credits, and then, depending on your selected program, use them or roll them over. And while it might seem confusing at first, the concept is not at all foreign to the public cloud - AWS provides reserved instances, and Google Cloud has sustained-use discounts. The concept actually fits VMware well, since it resembles the type of service VMware typically sells - programs for different periods of time provide different discount options - and it uses buying channels already familiar to VMware customers. Choosing the SPP option does affect other aspects of your purchase process, like the way in which you can add configurations in different regions, but I’ll leave that out of scope.

You’ll see that much like AWS and vCloud Air have pretty different offerings, so does Ravello - in our case, enabling full support for running VMware VMs, complete with L2 networking, on AWS or Google Cloud using nested virtualization. As we dig more into vCloud Air pricing and other cloud pricing options, please join the conversation. Take a look at how we do things here and how Ravello pricing works, and give your feedback in the comments.

The post Understanding vCloud Air pricing: How virtual private cloud on-demand compares to AWS appeared first on The Ravello Blog.

NFV Orchestration: Overcome multi-tenancy challenges (part 1 of 4 post series)

Author:
Matt Conran
Matt Conran is a Network Architect based out of Ireland and a prolific blogger at Network Insight. In his spare time he writes on topics including SDN, OpenFlow, NFV, OpenStack, cloud, automation and programming.

This is part 1 of a 4-part series on NFV orchestration using public cloud NFVI. This post looks into the challenges traditional networks have with multi-tenancy and workload mobility. In the next, we'll show how Network Function Virtualization (NFV) fits in and can increase service velocity.

Leap-frogging the network to the advances in data-center

Over the last 10 years there has been an increasing proliferation of virtualization, primarily in the areas of compute and storage. However, in a data centre there is an additional functional block: the network. Networks have been lagging behind, with slow innovation, and have not virtualized to the same extent as storage and compute. We need to view and manage the network as one large fabric, in a centralized fashion. To increase agility, the network must become programmatic rather than a set of individual nodes managed box by box. A central view and management point for the network increases network efficiency.

The network should be consumable by the users of the infrastructure in a self-service manner. For example, an application developer should be able to deploy a stack without waiting for the network team to provision rules, or interacting with multiple technical teams for deployment. The network must become seamless and automated. The ability to roll out network services and applications in VMs or containers without network intervention is key to reducing time to market.

There has been a transition from hardware-centric data centres to agile virtual cloud data centres. One important aspect of the cloud data centre is that infrastructure is consumed as a service. When infrastructure is consumed as a service, the consumers of the infrastructure become the tenants of the cloud infrastructure, and multiple tenants accessing the cloud's resources make the cloud data centre multi-tenant in nature. In a public cloud, these are resources made available to multiple customers; in a private cloud, resources made available to different departments or organisational units. Multi-tenancy and the ability for many customers to share resources put pressure on traditional networking technologies.

Issues resulting from network multi-tenancy

Securing multi-tenant cloud environments drives the need for tenant isolation. Tenant A should not be able to communicate with tenant B without explicit permit statements. A tenant should be an independent island of resources, protected and isolated from other islands. Every tenant should have an independent view of an isolated portion of the network, and peak loads should not affect neighboring tenants in separate virtual networks. Noisy neighbors are prevented by policing and shaping at a VM or tenant level. Both policing and shaping limit the output rate, but with a functional difference: a policer drops (or re-marks) traffic exceeding the contracted rate, while a shaper buffers the excess and releases it later, smoothing the output.
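
The policing-versus-shaping distinction can be sketched with a token bucket. This is a deliberately simplified illustration (single token per packet, integer time ticks), not a production traffic-conditioning implementation:

```python
# Minimal token-bucket sketch contrasting a policer and a shaper.
# A policer DROPS packets that exceed the rate; a shaper QUEUES them
# and releases them on later ticks, smoothing the output.
class TokenBucket:
    def __init__(self, rate, burst):
        self.rate, self.burst = rate, burst
        self.tokens = burst
        self.queue = []  # used only in shaping mode

    def tick(self):
        """One time unit passes: refill tokens, drain the shaper queue."""
        self.tokens = min(self.burst, self.tokens + self.rate)
        sent = []
        while self.queue and self.tokens >= 1:
            self.tokens -= 1
            sent.append(self.queue.pop(0))
        return sent

    def police(self, pkt):
        """Policer: forward if a token is available, otherwise drop."""
        if self.tokens >= 1:
            self.tokens -= 1
            return pkt      # conforming -> forwarded
        return None         # excess -> dropped

    def shape(self, pkt):
        """Shaper: forward if possible, otherwise buffer for a later tick."""
        if self.tokens >= 1:
            self.tokens -= 1
            return pkt
        self.queue.append(pkt)
        return None         # excess -> delayed, not lost
```

Feeding a 5-packet burst into a rate-1, burst-2 bucket, a policer forwards 2 packets and drops 3, while a shaper forwards 2 immediately and trickles the remaining 3 out on subsequent ticks.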

Security is a major concern in multi-tenant environments, and a breach in one tenant should not affect others. Beachheading - an attacker jumping from one compromised location to another - should not be possible. And if a tenant does become compromised, traffic patterns and analytics should be available, enabling the administrator to block the irregular traffic caused by the attacker.

An increasing number and variety of applications are moving to the cloud, but unfortunately traditional network infrastructures are nowhere near agile enough to support them. Traditional networks were not designed for the cloud or to connect virtual endpoints within a cloud environment; they were invented to connect physical endpoints together. We are beginning to see the introduction of software-based networks in the form of overlays, used in combination with the traditional physical networks underneath them, the underlays.

Legacy VLANs are used to segment the network, which has proved inefficient for segmenting a dynamic and elastic multi-tenant network. VLANs are tedious: intervention is needed on every switch in the data centre, because tenant state is held on individual nodes in the fabric. VLANs also restrict the number of tenants due to the 12 bits available in the VLAN ID field, limiting you to 4096 VLANs. These will soon run out when deploying multi-tenant, multi-tier application stacks. VLAN designs also require MAC visibility in the core, and when a switch runs out of MAC table space it starts to flood. Unnecessary flooding wastes network bandwidth and hampers network performance. Also, a Layer 2 domain is a single broadcast and failure domain, causing havoc in the event of a broadcast storm. Instead of all these kludges we need to run networks over IP, similar to how Skype runs over the Internet. Scalable networks are built over IP, and overlays can be used to provide Layer 2 connectivity.
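
The arithmetic behind the VLAN ceiling, and why IP overlays relieve it, is a one-liner. Encapsulations such as VXLAN carry a 24-bit segment ID (the VNI) instead of the 12-bit VLAN ID:

```python
# The 802.1Q VLAN ID field is 12 bits, so at most 2**12 = 4096 IDs
# (a few of which are reserved, leaving 4094 usable). Overlay
# encapsulations such as VXLAN carry a 24-bit VNI instead, giving
# roughly 16 million isolated segments.
vlan_ids = 2 ** 12
vxlan_vnis = 2 ** 24
print(vlan_ids)    # 4096
print(vxlan_vnis)  # 16777216
print(vxlan_vnis // vlan_ids)  # 4096x more segments
```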

The post NFV Orchestration: Overcome multi-tenancy challenges (part 1 of 4 post series) appeared first on The Ravello Blog.

Nested virtualization: How to run nested KVM on AWS or Google Cloud

Running nested KVM on public clouds such as AWS and Google has traditionally been a challenge, because hypervisors like KVM are designed to run on physical x86 hardware and rely on the virtualization extensions offered by modern CPUs (Intel VT and AMD SVM) to virtualize the Intel architecture. It is now possible with Ravello’s nested virtualization technology.

Ravello’s nested virtualization technology is called HVX - it runs on the public cloud and implements the virtualization hardware extensions (Intel VT and AMD-V) in software. HVX thus exposes a true x86 platform to the "VM" running on top of the public cloud, which allows enterprises to run hypervisors like KVM on AWS. From an implementation perspective, we have adapted our binary translation so that it recognizes the double-nesting, effectively removes one layer of nesting, and runs the guest directly on top of HVX. As a result, the performance overhead is relatively low. In addition, we have implemented nested-page support inside HVX, which makes running a hypervisor on top of HVX even more efficient.

Currently, Red Hat uses Ravello to run their global training for OpenStack with nested KVM on AWS - in various regions all over the world.

Here is a step-by-step guide on how to install and run RHEV with nested KVM on Ravello. If all you need is a host with KVM, you can use one of the vanilla VMs provided in the Ravello library, enable the nested flag and go ahead.
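
As a quick sanity check before installing, a Linux host's readiness for nested KVM can be probed from two standard interfaces: the vmx (Intel) or svm (AMD) flag in /proc/cpuinfo, and the kvm module's nested parameter under /sys. A small sketch (the helper function names are my own):

```python
# Sketch: check whether a Linux host can support nested KVM.
# - The vmx (Intel) / svm (AMD) flag in /proc/cpuinfo indicates hardware
#   virtualization support on the (possibly virtual) CPU.
# - /sys/module/kvm_intel/parameters/nested (or kvm_amd) reports whether
#   the module's nested flag is enabled ("Y" or "1").
from pathlib import Path

def has_virt_extensions(cpuinfo_text):
    """Return True if any CPU in the cpuinfo text advertises vmx or svm."""
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            flags = line.split(":", 1)[1].split()
            if "vmx" in flags or "svm" in flags:
                return True
    return False

def nested_enabled():
    """Read the kvm module's nested parameter, if the module is loaded."""
    for module in ("kvm_intel", "kvm_amd"):
        p = Path(f"/sys/module/{module}/parameters/nested")
        if p.exists():
            return p.read_text().strip() in ("Y", "1")
    return False

if __name__ == "__main__":
    cpuinfo = Path("/proc/cpuinfo")
    if cpuinfo.exists():
        print("virt extensions:", has_virt_extensions(cpuinfo.read_text()))
        print("nested enabled:", nested_enabled())
```

On Ravello, HVX provides the extensions in software, so a guest with the nested flag enabled sees them just as it would on bare metal.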

You can try this on Ravello for free using our 14 day trial. And be sure to check out the ready-made blueprints for OpenStack and other Linux deployments available on Ravello Repo.

The post Nested virtualization: How to run nested KVM on AWS or Google Cloud appeared first on The Ravello Blog.
