How to setup VXLAN using Arista vEOS to seamlessly connect data-centers

Author:
Matt Conran
Matt Conran is a Network Architect based out of Ireland and a prolific blogger at Network Insight. In his spare time he writes on topics ranging from SDN, OpenFlow, NFV, OpenStack, Cloud, Automation and Programming.

The following post explains the key drivers for overlay virtual networking and a popular overlay technology, Virtual Extensible LAN (VXLAN). It walks through how to set up a working VXLAN design based on Arista vEOS on Ravello’s Networking Smart Lab. Additional command snippets from a multicast VXLAN setup using Cisco CSR1000V are also included.

Evolving Data Centre

The evolution of the data centre started with cloud computing, when Amazon introduced the first commercial product to provide a “cloud anytime and anywhere” service offering. The result has driven a major paradigm shift in networking, forcing us to rethink data centre architectures and how they service applications. Design goals must be in line with new scalability and multi-tenancy requirements that existing technologies cannot meet.

Overlay-based architectures provide some protection against switch table explosion. They offer a level of indirection that keeps switch table sizes from growing as the number of hosts increases, which addresses some of the scalability concerns. Putting design complexities aside, an overlay is a tunnel between two endpoints, enabling frames to be exchanged between those endpoints. As the diagram below shows, it is a network built on top of another network. The overlay is a logical Layer 2 network, and the underlay can be an IP or Layer 2 Ethernet fabric.

Overlay Network

The ability to decouple virtual from physical drives new scalability requirements that push data centres to the edge. For example, the number of virtual machines deployed on a single physical server has increased. Each virtual machine has one or more virtual network cards (vNICs), each with a unique MAC address. In traditional switched environments, each switch port connected a single server, viewed as a single device, so the corresponding upstream switch processed a low number of MAC addresses. Now, with the advent of server virtualization, a ToR switch accommodates multiple hosts on the same port. As a result, MAC table sizes must increase by an order of magnitude to accommodate the increased scalability requirements driven by server virtualization. Some vendors support relatively small MAC address tables, which may result in MAC address table overflow, causing unnecessary flooding and operational problems.

Over the course of the last 10 years, the application has changed. Customers require complex application tiers in their public or private IaaS cloud deployments. Application stacks usually contain a number of tiers that require firewalling and/or load-balancing services at the edges and within the application/database tiers. Additionally, in order to support this infrastructure you need Layer 3 or Layer 2 segments between tiers; Layer 2 is needed for non-routable traffic such as keepalive packets.

Application Tiers

The infrastructure must support multi-tenancy for numerous application stacks in the same cloud environment, and each application stack must remain independent of the others. Application developers continually request similar application connectivity models, such as the same IP addressing and security models, for both on-premises and cloud services. Applications that are “cloud centric” and born for the cloud are easy to support from a network perspective. On the other hand, applications optimized for “cloud-ready” status may require some network tweaking and overlay models to support coexistence. Ideally, application teams want to move the exact same model to cloud environments while keeping every application as an independent tenant. You may think that 4,000 VLANs are enough, but once you start deploying each application as a segment, and each application has numerous segments, the 4,000 VLAN limit is soon reached.

Customers want the ability to run any VM on any server. This type of unlimited and distributed workload placement results in large virtual segments. Live VM mobility also requires Layer 2 connectivity between the virtual machines. All this, coupled with quick on-demand provisioning, is changing the provisioning paradigm and the technology choices we employ to design data centres.

The Solution – Overlay Networking

We need a technology that decouples the physical transport from the virtual network. The transport should run IP only, with all complexity handled at the edges. Keep complexity at the edge of the network and let the core do what it should do – forward packets as fast as possible, without making too many decisions. The idea of a simple core and smart edges carrying out the intelligence (encapsulation) allows you to build a scalable architecture. Overlay virtual networking supports this concept: all virtual machine traffic generated between hosts is encapsulated and becomes an IP application, similar to how Skype (voice) uses IP to work over the Internet. The result of this logical connection is that the edge of the data centre has moved and potentially no longer exists in the actual physical network infrastructure. In a hypervisor environment, one could now say the logical overlay is decoupled from the physical infrastructure.

Physical Infrastructure

Virtual Extensible LAN (VXLAN) is a LAN extension over a Layer 3 network. It relays Layer 2 traffic over different IP subnets. It addresses the current scalability concerns with VLANs and can be used to stretch Layer 2 segments over a Layer 3 core, supporting VM mobility and stretched clusters. It may also be used as a Data Centre Interconnect (DCI) technology, allowing customers to have Multi-Chassis Link Aggregation Group (MLAG) configurations between two geographically dispersed sites.

VXLAN identifies individual Layer 2 domains by a 24-bit virtual network identifier (VNI). The following configuration snippet from the Cisco CSR1000V displays VNI 4096 mapped to multicast group 225.1.1.1. In the CSR1000V setup, PIM (Protocol Independent Multicast) sparse-mode is enabled in the core.

VNI
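As a rough IOS-XE sketch of that mapping (only VNI 4096 and group 225.1.1.1 come from the setup described above; the loopback address, interface naming and bridge-domain number are assumptions for illustration):

ip multicast-routing distributed
!
interface Loopback0
 ip address 10.255.255.1 255.255.255.255
 ip pim sparse-mode
!
! (ip pim sparse-mode is also required on the core-facing interfaces)
interface nve1
 no shutdown
 source-interface Loopback0
 member vni 4096 mcast-group 225.1.1.1
!
! VNI 4096 is then attached to a bridge-domain carrying the Layer 2 segment
bridge-domain 1
 member vni 4096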

The VNI identifies the particular VXLAN segment and is typically derived from the IEEE 802.1Q VLAN tag of the received frame. The 24-bit segment ID allows up to 16 million virtual segments in your network. VXLAN employs a MAC-in-IP/UDP overlay scheme that allows unmodified Layer 2 Ethernet frames to be encapsulated in IP/UDP datagrams and relayed transparently over the IP network. The main components of the VXLAN architecture are the VXLAN tunnel endpoints (VTEPs) and the virtual tunnel interfaces (VTIs). The VTEPs perform the encapsulation and de-encapsulation of the Layer 2 traffic. Each VTEP is identified by an IP address (assigned to the VTI), deployed as the tunnel endpoint.

Earlier VXLAN implementations consisted of virtual Layer 2 segments with flooding via IP multicast in the transport network. They relied on traditional Layer 2 flood-and-learn behaviour and did not have an explicit control plane. Broadcast, multicast and unknown unicast frames were flooded similarly to how they are flooded in a physical Ethernet network – the difference being that they were flooded using IP multicast. The configuration snippet below is from the CSR1000V setup and displays multicast group 225.1.1.1 signalled with flag BCx (VXLAN group).

Tunnel

Multicast VXLAN scales much better than VLANs, but you are still limited by the number of multicast entries. Also, multicast in the core is undesirable, especially from a network manageability point of view – it is another feature that has to be managed, maintained and troubleshot. A more scalable solution is unicast VXLAN. There are methods to introduce a control plane to VXLAN, such as LISP and BGP EVPN, and these will be discussed in later posts.

Arista Ravello Lab Setup

I set up my Arista vEOS lab environment on Ravello’s Network Smart Lab. It was easy to build the infrastructure using Ravello’s application canvas. One can either import the VMs with the Ravello import utility or use Ravello Repo to copy existing ‘blueprints’. I was able to source the Arista VMs from the Ravello Repo.

Once you have the VMs set up, move to the “Application” tab to create a new application. The application tab is where you build your deployment before you publish your virtual network deployment to either the Google or Amazon cloud. Ravello gives you the opportunity to configure some basic settings from the right-hand sections. I jumped straight to the network section and added the network interface settings.

Network Interface

The settings you enter in the canvas set up the basic network connectivity. For example, if you configure a NIC with address 10.1.1.1/24, the Ravello cloud will automatically create the vSwitch. If you require inbound SSH access, add this external service from the services section. Ravello will automatically give you a public IP address and DNS name, allowing the use of your local SSH client.

canvas

Once the basic topology is set up, publish to the cloud and start configuring advanced features via the CLI.

VXLAN Setup

The Arista vEOS design consists of a 4-node setup. vEOS is EOS running in a VM; it is free to download and can be found here. The two core nodes, R1 and R2, are BGP peers and form a BGP peering with each other, simulating the IP core. There are no VLANs spanning the core – it is a simple IP backbone forwarding IP packets. Host1 and Host2 are IP endpoints and connect to access ports on R1 (VLAN 10) and R2 (VLAN 11) respectively. They are in the same subnet, even though they are separated by the Layer 3 core. The hosts do not perform any routing.

VXLAN Setup

Below is a VXLAN configuration snippet for R1. The majority of the VXLAN configuration is done under the Vxlan1 interface settings.

image04

The next snippet displays the VXLAN interface for R2. The loopbacks are used as the VXLAN VTEP endpoints, and you can see that the R2 VTEP loopback address is 172.16.0.2. The VNI is set to 10010 and is used to identify the VXLAN segment.

image01
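As a rough sketch of what the Vxlan1 configuration on R1 and R2 might look like, based on the values mentioned above (the R1 loopback address is inferred, and the UDP port and head-end replication flood list are assumptions for illustration):

! R1
interface Loopback0
   ip address 172.16.0.1/32
!
interface Vxlan1
   vxlan source-interface Loopback0
   vxlan udp-port 4789
   vxlan vlan 10 vni 10010
   vxlan flood vtep 172.16.0.2

! R2
interface Loopback0
   ip address 172.16.0.2/32
!
interface Vxlan1
   vxlan source-interface Loopback0
   vxlan udp-port 4789
   vxlan vlan 11 vni 10010
   vxlan flood vtep 172.16.0.1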

The MAC address table for R2 below shows the MAC address of H2 only. The core supports Layer 3 only, and the only routes learnt are the VTEP endpoints.

image05

The configurations for Arista vEOS can be found at this GitHub account.

Conclusion

VXLAN is a widely used data-center interconnect (DCI) technology, and it can be implemented using Arista vEOS or Cisco CSR1000V to seamlessly connect data-centers. Ravello’s Networking Smart Lab provides an easy way to model and test a VXLAN design before it is rolled out into production infrastructure.

The post How to setup VXLAN using Arista vEOS to seamlessly connect data-centers appeared first on The Ravello Blog.


vCloud Air OnDemand: Virtual Private Cloud – Review

Author:
Jeffrey Kusters
Jeffrey is an IT professional with almost 20 years of experience. He holds several leading certifications, including VMware VCP Cloud, VCP5, VCAP-DCA5, PRINCE2 Foundation and TOGAF L1. In recent years he has been primarily focused on designing virtual infrastructures for modern, software-defined datacenters using VMware, and he has a deep understanding of public cloud offerings like VMware vCloud Air, Amazon AWS and Microsoft Azure.

I recently did a disaster recovery to the cloud proof of concept / demo using Ravello Systems and VMware vCloud Air public cloud services. This blog post explains how I set up the demo.

First, I deployed a basic 3-node ESXi 6.0 VSAN cluster in Ravello. I tried to keep my setup as clean and simple as possible. My main purpose with this demo was to replicate VMs to vCloud Air and perform a disaster recovery – nothing more. So I kept everything very plain and very simple: one /24 subnet for my entire “on premises” Ravello datacenter, a single standard vSwitch with a single uplink and one VMkernel port for just about everything. I know, not very scalable and resilient, but remember it was just a demo. I deployed a Windows 2012 R2 based domain controller directly on Ravello and added another Windows based machine, which I used as a jumphost and to install vCenter 6.0 for Windows on.

image1

Next, I needed something to set up a VPN to vCloud Air with. Ravello provides a nice step-by-step guide on how to deploy and set up a pfSense virtual firewall appliance, so I went with that. Of course I could also have set up vShield Networking and Security or even NSX-v, but the resource overhead and added complexity outweighed the benefits. Deploying and setting up pfSense was very easy, but getting the IPsec tunnel to vCloud Air up and running was a pain. Making sure that both sides of the VPN use exactly the same settings can be a challenge when you are using different products. In the end I got the IPsec tunnel to pass traffic successfully using this VMware KB article.
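To give a sense of what has to line up, here is a purely illustrative set of parameters – the actual values must be whatever the vCloud Air Edge Gateway is configured with (per the KB article), mirrored exactly in pfSense:

Phase 1 (IKE):    IKEv1 main mode, pre-shared key,
                  AES-256 encryption, SHA1 hashing, DH group 2, lifetime 28800 s
Phase 2 (IPsec):  ESP, AES-256 encryption, SHA1 hashing, matching PFS setting, lifetime 3600 s
Identifiers:      local/peer identifiers and the local/remote subnets
                  (the Ravello /24 and the vCloud Air gateway network) must match on both ends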

Because it was just a demo setup (and because I couldn’t get the tunnel up and running at first) I decided to allow all traffic through the firewalls. I also port forwarded RDP traffic on TCP3389 from the WAN interface of the pfSense firewall appliance to my jumphost so I could RDP to it directly over the internet.

Now it was time to set up the vCloud Air Disaster Recovery side of the demo. But first let me explain why we needed to set up a vCloud Air OnDemand Virtual Private Cloud instance first: vCloud Air DR only provides so-called warm standby resources. This means that it is not possible to spin up an active VM inside a DR cloud instance. The only way to get VMs to run in a DR instance is by replicating them into the cloud and performing a DR. Replicating domain controllers using vSphere Replication seriously breaks AD, so I had to provide supporting infrastructure services such as AD, DNS and NTP myself … somewhere. A vCloud Air Virtual Private Cloud OnDemand is an ideal place to run these services. Basically this cloud instance is just another datacenter, so I set up my basic networking, my DNS and my AD Sites and Services. I hooked up the Edge Gateway - that VMware provides in every cloud instance - to my Ravello site using an IPsec VPN and finally, I opened my firewall ports. My AD was replicating and I had successfully added a vCloud Air VPC OnDemand instance to my infrastructure. Finally, I set up a VPN between the vCloud Air OnDemand instance and the vCloud Air DR instance so the recovered workloads could access the supporting infrastructure services after a disaster recovery. Setting up this VPN was really easy because both VPN endpoints are VMware-provided Edge Gateways.

Next up: vSphere Replication. Setting this up was pretty easy. The appliance is supplied as an OVF: you just put in your networking details and off you go. Once the appliance was deployed, I finalized the setup through the Virtual Appliance Management Interface (VAMI), which runs on port 5480 of the appliance. The most important step is registering the appliance with Single Sign-On. The vSphere Replication plugin is then automatically installed in the vSphere Web Client.

I deployed a single MicroCore Linux based VM. This is a pretty small Linux distribution which contains the open source VMware Tools. Before I could replicate it to vCloud Air DR I needed to register my cloud instance with vSphere Replication. VMware provides a vSphere Replication instance inside vCloud Air for you so you don’t have to configure or deploy anything there. Registering the vCloud Air DR instance as a target site is done using the vSphere Web Client.

image2

Once my vCloud Air DR instance was registered, I could initiate an outgoing replication for my MicroCore Linux VM from my Ravello based “datacenter” to vCloud Air. I can imagine that all these cloud instances and VPN tunnels can get a bit confusing so I drew up a small infrastructure drawing in Visio:

image3

Remember that the vCloud Air instances are provided from UK based datacenters and that the Ravello powered vSphere datacenter is running in an AWS datacenter in Virginia, USA. This virtual machine was actually replicating across the Atlantic.

That was it. Everything was set up now and I could perform a planned migration, a disaster recovery and a native failback using the vSphere Web Client or the vCloud Air tenant portal.

The post vCloud Air OnDemand: Virtual Private Cloud – Review appeared first on The Ravello Blog.

On-demand Cyber Ranges on AWS using Ravello- making cybersecurity development, testing and training affordable & accessible for enterprises

Today’s cyber threat landscape necessitates that your organization base its approach to security on the assumption that the adversary is already inside your network. So how do we prepare your organization to take back your network and to protect your data?

SimSpace is proud to introduce our Virtual Clone Network (VCN) technology, which provides realistic environments, adversary attack campaigns, and training and assessment tools for your organization’s cybersecurity development, testing, and training requirements. With SimSpace’s VCN, you are no longer restricted to small networks or to virtual environments that are not representative of your specific network environment and typical traffic. SimSpace VCN is a first-of-its-kind offering because it utilizes capacity from Amazon Web Services and Google Cloud to provide full-featured, pre-configured and tailorable Cyber Ranges that are deployed on-demand in fully isolated environments - made possible by Ravello Systems’ nested virtualization and software-defined networking technology.

A SimSpace VCN can span in size from tens of hosts to several hundred and, for urgent requirements, we have built several easily accessible models including enterprise environments, public utilities, financial institutions, or military networks. Depending on your circumstances, you can customize and extend the existing pre-defined networks or you can start from scratch and generate an entire network tailored to meet your specific organizational needs. We will give you the tools necessary to rapidly create, configure and validate your own customized virtual environment. The process to build and configure your VCN is fully automated; we just need to know your requirements.

Leveraging both the advantages of the cloud and Ravello's cutting edge HVX technology, you can spin up the environment of your choosing, for just the amount of time that you need it, and then suspend or delete it when finished. You no longer need a dedicated staff to build, operate, and maintain custom, in-house, and often separate, development, test and training environments. Instead, focus your staff and resources on what you need the most ... being prepared to be effective against the threat.

The technology used to build and run our Virtual Clone Networks was developed after a decade of investment by the U.S. Military to provide high-fidelity virtual environments for the DoD testing and training communities. Now SimSpace can offer the same technology that powers the government’s most sophisticated cyber ranges to your business in a more affordable and accessible manner.

Uses

So what can you do with a Virtual Clone Network? Some examples include:

  • Test and development environments to create the next generation cybersecurity solutions
  • Risk reduction for the introduction of new cybersecurity solutions into production environments
  • Hypotheses testing for real-time responses to cyber incidents
  • Disruptive-capable assessments that complement traditional pen-testing of production networks
  • Comparative analysis of existing or new cybersecurity solutions against competing alternatives
  • Virtual environment for pen-testing risk-reduction analyses
  • Assessment of the effectiveness of pen-testing derived cybersecurity solutions
  • Assessments of individual and team cybersecurity performance
  • Individual training for cybersecurity and pen-testing operators
  • Cybersecurity team training
  • Range-based Cyber exercises

Virtual Clone Network Capabilities

Predefined or tailored network environments

Your network can be chosen directly from a suite of predefined networks, tailored by extending or adjusting one of the predefined networks, or built from scratch to meet your specific needs. The predefined networks range in scale from tens of nodes to hundreds of network machines. These pre-built networks are representative of a variety of organizations: enterprises, the defense industrial base, financial institutions, utilities or military networks. These virtual networks are all self-contained, that is, isolated from the Internet, in order to prevent any accidental spillage or inadvertent attacks on real-world sites. Our intent is to provide a safe environment where you can test and train without unnecessary consequences.

Despite the advantages of being isolated, effective testing and training still require a realistic Internet within our VCNs. To accomplish this, we re-host thousands of sampled web, email, and FTP sites. We also provide root and domain DNS servers and core BGP routing. Within the VCNs, just as in a typical network, we run virtual routers, full Windows Domain Controllers, Exchange, IIS, DNS and file servers. Linux, Unix and other server and client operating systems are also included along with their popular services. As much as system administrators would like to think that their networks are perfectly constructed and aligned, the reality is that there are many misconfigurations, in addition to legacy and unwanted traffic. So we add that in as well.

For each of the services, we also include real content in the sites and services so that our virtual users can interact with that content in a realistic manner (e.g. send/receive/open email attachments, click on embedded URLs, etc.). We are also able to tailor and reproduce important features of many domain-specific or custom applications and services that are critical to your business area, so they too may be included to fully represent the defensive posture of your organization and challenge your defensive team. Finally, we are able to provide a wide set of operating systems, services, data, and user accounts because we have developed the tools and processes to fully automate both their setup and configuration.

Realistic, host-based user activity

To create high-fidelity replicas of networks, we need more than just the hosts, servers, and infrastructure to match the architecture. To be truly realistic, we also need to recreate all the user activity, both productive and unproductive, that we see on a daily basis. Users today mix their personal and professional lives and vary in their level of productivity, focus, application usage, social networking and awareness of cybersecurity threats. To generate this level of realism, SimSpace provides the most advanced user-modeling and traffic-generation capability available to make the VCNs come alive. Each host on the network is controlled by a virtual user agent who logs in each morning and uses real applications like Internet Explorer, Firefox, MS Office and Windows Explorer to perform their daily activities. As every Netizen is like a snowflake, unique in their own way, our virtual user agents are programmed with their own individual characteristics. Each user has their own unique identity, accounts, social and professional networks, daily schedule, operating behavior and preference for which applications to use, when and how often. Just like in the real world, users interact with other users, compose emails, open, edit and send documents to co-workers and external collaborators to accomplish their daily tasks. These virtual users are goal-driven and reactive, which means they can respond to predefined instructions and sense their environment and any changes within it. Therefore, if a particular service or application becomes unresponsive, they can adjust their behaviors and application usage to complete the tasks. This rich and immersive environment generates the daily host and network activity that sophisticated attackers use to hide or obscure their presence. This typical “top cover” allows them to exploit user applications and operating systems (e.g. spear-phishing, drive-by-downloads) to gain a foothold in the network and operate covertly. The challenge for the defensive operators and their tools is to identify and stop attackers who are also operating alongside legitimate users. If successful, of course, your cybersecurity team will prevent the adversary from carrying out its goals and will minimize the disruption to your business operations.

Defensive tools and applications

Ravello's unique and powerful layer2 network and nesting technology allows us to integrate open-source and commercial defensive and offensive tools into a SimSpace VCN. Ravello is the only cloud provider in the industry with these robust and innovative networking technologies. SimSpace VCNs are preloaded with popular security solutions like pfSense, Security Onion, OpenVAS, Kali Linux and are configured according to industry best practices. Depending on your requirements, these typical cybersecurity tools can be replaced or combined with other more appropriate solutions. By loading your specific configuration files and rule sets, your VCN becomes more tailored to your environment and, in turn, enhances your training, testing, and assessment results.

Model sophisticated adversaries

SimSpace’s VCNs, regardless of whether they are predefined or tailored, come with some of the most advanced capabilities for simulating real users. But what about simulating advanced adversaries? To simulate a real advanced threat, you need to simulate advanced tactics, and that starts with zero-day emulation. In the Virtual Clone Network, every piece of software has built-in memory corruption exploits, with remote, client-side, and local exploit options. This offers the most advanced zero-day emulation threat capability against every host in your VCN, regardless of its patch level or operating system. Want to see how well your company responds to a zero day? SimSpace VCNs can put your team to the test!

SimSpace Breach is the most advanced penetration-testing tool in existence. With SimSpace Breach, you can enable your Red Teams to not only work more efficiently, but deliver a higher threat capability in a shorter amount of time than ever before. With the same number of red team operators, more threat engagements of higher caliber can be accomplished in a similar time period. In addition, SimSpace Breach has instrumentation that works within the Virtual Clone Network to allow you to gain better insights on your tooling, people and process.

Assessment Tools

Now that we have provided you a realistic environment and the ability to recreate sophisticated adversaries, how will your cybersecurity team or the tools they rely upon perform? To answer these questions, we have developed a suite of assessment tools to help. Your VCN is a highly instrumented environment that can provide insights into the defensive effectiveness of your team as well as the impact to your organization’s cyber environment from an attack. Specifically, we can help you understand 1) what were the specific attacker actions and movements performed, 2) how many virtual users experienced service disruptions, 3) what was the response time for the defenders to identify the attacker, repel them from the network, and then, if required, restore business operations, and 4) what was the mission impact during the attack. For each testing or training objective, we are able to capture specific objective performance metrics and allow you to assess your team’s effectiveness and, over time, their rate of improvement.

Availability

We are unveiling the new technology and announcing beta access today.

About SimSpace

SimSpace’s mission is to measurably improve, in a cost effective way, the cyber capabilities of your enterprise.

Who we are:

  • An innovative cybersecurity company leveraging decades of experience working for the U.S. Military and DoD Laboratories to provide next-generation cyber assessments, training, and testing.
  • SimSpace provides high-fidelity simulated network environments, or Virtual Clone Networks (VCN), for tailored, interactive, and scalable cyber events along with specialized software tools for activity replay, mission impact evaluation, and network monitoring.
  • SimSpace focuses on your organization’s entire cybersecurity capability — People, Process, and Technology — successfully integrating and validating testing, training, and assessments for individuals, small-team and large-force training exercises for 100+ operators.

The post On-demand Cyber Ranges on AWS using Ravello- making cybersecurity development, testing and training affordable & accessible for enterprises appeared first on The Ravello Blog.

Multi Node Openstack Kilo lab on AWS and Google Cloud with Externally Accessible Guest Workload – How to configure Openstack networking on Ravello Systems Part 1

Last week we went into how to prep an image for Ravello/AWS/Google/ESXi. This week we're going to leapfrog ahead a bit and talk about networking and OpenStack.

OpenStack is highly complicated for a number of reasons, chief amongst them being that what it seeks to do is replace a bunch of highly complex silos. Second, but not far behind, is that it does this via a collection of independently developed microservices.

OpenStack as an organization of projects has a consensus culture, not a strong central authority / command culture. Without a central authority laying down standards, everything is based on consensus, first within a project (e.g. Neutron) and then within the community of projects, with the project trumping the community. The most frequent manifestation of this is inconsistencies in command and API syntax between projects, but you’ll also find instances where someone has snuck a change into a not-really-related project because they couldn’t get it into the relevant project.

All that being said, for this week I've attempted to publish a one-click OpenStack blueprint in Ravello.

Get it on Repo
REPO, by Ravello Systems, is a library of public blueprints shared by experts in the infrastructure community.

With great hubris, I attempted to get a multi-node install setup where one could simply click "start instance" and have an externally accessible cirrOS. I couldn’t quite pull that off, but I got pretty close.

When you copy and go to launch the blueprint you’re going to get a blueprint error as follows:

image00

To resolve it, you can do one of two things: assign an elastic IP address or a public IP address to the secondary IP on the neutron node. This is done by selecting Network, scrolling down to the first interface (eth0) and clicking Advanced. You should see something like this.

image01

To resolve the error, you can either assign an elastic IP by clicking select or shift it to a public IP.

While we’re here, let’s also look at the security rule that delegates firewall rules to OpenStack:

image06

Note this is mapped to the static IP above and allows all inbound access.

You can now publish the application. It’s going to take a bit to spin up, but when it does so, you can view the public IP on the neutron node under the general tab in the dropdown for eth0/1:

image05

90% of the time, this works every time. If everything goes well, you’ll have a Horizon dashboard up at the public IP of the controller node and will be able to navigate to it via https. Log in as “user” / “openstack”, go to Instances, and launch the instance. When it boots, you will be able to reach the cirrOS image on the public or elastic IP you assigned to it earlier via ssh / icmp (user: “cirros”, password: “cubswin:)”):

image04
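For example, from a local terminal (substitute the public or elastic IP you assigned; the credentials are the cirrOS defaults quoted above):

ssh cirros@<public-or-elastic-IP>     # password: cubswin:)
ping <public-or-elastic-IP>           # ICMP is allowed through as well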

This works because a floating IP, 192.168.0.33 has been assigned to the instance. This is in private IP space, but because a public or elastic IP has been associated with that static address on the Ravello side, it gets mapped end to end:

image07
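If the floating IP ever needs to be recreated or re-attached by hand, the Kilo-era CLI equivalent looks roughly like this (the credentials file, external network name and instance name are assumptions – the blueprint may use different ones):

source keystonerc_user                       # assumed credentials file on the controller
neutron floatingip-create public             # allocate a floating IP from the external network
nova floating-ip-associate <instance-name> 192.168.0.33
nova list                                    # confirm the floating IP is attached to the instance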

Unfortunately, the other 10% does sometimes happen, and it isn’t strictly deterministic. Horizon (the dashboard) sometimes just goes away for a bit (although it comes back). The instance also sometimes hangs on start (you’ll see it waiting on a virtual wire in the instance log) and requires a hard reboot through Horizon. I’ve found deploying to Amazon more reliable than Google, but this is such an edge case that this is probably just a feeling, not reality.

All of these little wrinkles, ironically, may make the lab the perfect intro to OpenStack. It works, but not quite perfectly, and sometimes needs a little intervention.

I’ll go more into general networking in a future post, including what the OpenStack network documentation images I have nicknamed the “arc of the covenant images” mean (somewhat):

image03

image02

Exciting huh?

The post Multi Node Openstack Kilo lab on AWS and Google Cloud with Externally Accessible Guest Workload – How to configure Openstack networking on Ravello Systems Part 1 appeared first on The Ravello Blog.

How Techclyde delivers VMWare training with ESXi virtual student labs running on AWS and Google Cloud with Ravello Systems

Author:
Raghunatha Chary Maringati, CEO, Techclyde Services Private Limited
Business leader with over 22 years of client success across diverse industries including Information Technology & Telecom. Interests include business strategy, sales & marketing, employee engagement, new product innovation, technology partnering and client relationship management.

Techclyde, founded in 2015, is a cloud computing professional services provider that also has an academic wing offering various professional courses related to cloud computing.

Techclyde is focused on providing state-of-the-art training on cutting-edge technologies and empowers IT professionals and individuals with the right skills to leverage their potential in today’s ever-changing, dynamic IT landscape. We are a group of dynamic IT architects with a combined experience of more than 50 years in IT datacenter and cloud computing technologies. We provide three types of services – cloud consulting, infrastructure consulting and operations support. We also offer training in datacenter, cloud computing, virtualization, DevOps and various other niche technologies.

It is therefore very important for Techclyde to have a quality cloud lab environment for the technical training of students and corporate clients. After experimenting with a few shared lab providers and experiencing performance and stability issues, we decided to go with Ravello. We could fulfill our most critical requirement of setting up cloud-based labs by deploying complex multi-VM application environments for training.

Also, features like on-demand multi-node OpenStack labs for training, VMware ESXi lab environments in AWS, on-demand demos, and sales enablement applications in VMware or KVM helped us expand our range of training programs and gave trainees hands-on experience.

The key benefits for us were:

  • By using the blueprint functionality we could deploy complex applications in minutes, i.e. create an application stack once and reuse it as needed.
  • We could provide a better 24x7 experience to our students, since these cloud labs can be accessed from anywhere, even after the trainings. Students can feel comfortable and practice after class.
  • We could forecast and control our cloud consumption with zero CAPEX.
  • We removed the dependency on physical failures or complete lab failures, since in Ravello we can have unlimited isolated lab setups, which we can kill and start anew from the blueprint without needing to spend time debugging them.

We were not limited by the availability of physical hardware and could accommodate as many students/batches as needed by deploying student training labs on-demand. We no longer require limited/restricted user terminals to access the lab setup; instead, we allow users to connect directly to a Techclyde customized client interface VM hosted in Ravello using a public IP. With this, we are able to achieve completely customized labs based on our requirements, including a default runtime setup based on the audience / training module.

Each dedicated per-student lab in Ravello consists of the following modules:

  1. VMware Management servers
    1. vCenter Server Appliance
    2. VMware vCenter Orchestrator Appliance
  2. Couple of ESXi 5.5 Hypervisors
  3. Storage Node for both ISCSI and NFS (Openfiler)
  4. Client Interface VM with external access

Techclyde virtualization ICM 5.5 Student LAB setup

Networking Canvas from Ravello

The post How Techclyde delivers VMWare training with ESXi virtual student labs running on AWS and Google Cloud with Ravello Systems appeared first on The Ravello Blog.

Provisioning and running on-demand ESXi labs on AWS and Google Cloud for automation testing – Managed Services Platform and delivery

Author:
Myles Gray
Myles is an Infrastructure Engineer for Novosco Ltd in the MSP division, primarily focused on implementing IaaS projects and automation for both self-hosted and private customer clouds.

Company Profile

Novosco is a leading provider of Cloud Technologies, Managed Services and Consulting. We specialise in helping organisations utilise the unique aspects of emerging technologies to solve business challenges in new and dynamic ways. We operate under managed service or strategic partnership contracts with our major clients.

Our Use Case

Ravello's ESXi labs in particular are used by the Managed Service Platforms division.

In order to support our growth and deliver a consistent managed service quality across our client base we have standardized "checks" we run on environments at differing intervals.

This gives us a pro-active element and the ability to see problems before they emerge, or indeed catch them and detail them for remediation.

There is only so much we can deliver reliably with manual effort. As such, to allow our MSP division to scale clients without needing to scale team size unnecessarily, we turned, as any company would, to automation and scripting.

We have some very valuable and specialised checks that we have automated, that would either take an individual a considerable amount of time and effort to produce, be prone to error due to complex calculations needed or just not be feasible to monitor at the frequency we require.

So scripting was a must – what about testing?

That's where Ravello's labs come in (in particular ESXi virtualisation). They allow us to test scripts, at scale, to a level that is just not feasible within a physical lab, or indeed with the same repeatability and consistency of environment.

image01

I'll take one instance as an example: I was asked to produce a script that would automate the testing of host-level operations. Obviously, at some point you're going to have to modify the environment.

So, environment modification, that's completely automated... Sounds like a recipe for disaster, right?

Sure, but if you test it thoroughly with enough configs and hone your permissions down, you minimise risk, and that's what we use Ravello for:

Testing scripts that could cause collateral damage before putting them anywhere near production.

Ravello allows us to spin up and down environments that are similar, if not identical, to a customer's, or test cases that we think may break our automation.

Their Blueprint feature makes it super-easy to spin up and destroy these to test whether a new feature in the code will break under certain environmental conditions.

We are building up a library of VM profiles (different ESXi builds and vCenters) and blueprints that simulate these environments, allowing us to deploy, at very reasonable cost and with minimal effort, the same conditions these scripts will see in the wild.

And the icing on the cake?

We don't break our own lab or customer infrastructure, win-win.

Repeatable, robust and shareable

Obviously, when you create a lab of any kind it takes a time investment, so if you are able to keep that environment and spin it up many times for the same initial time investment, isn't it a no-brainer?
Yep, and that's just what we were able to achieve with Ravello's blueprint feature. I created a single blueprint with a common environment (4x hosts, 1x vCenter and shared storage):

image00

Pretty standard, nothing extravagant, but something we see quite often. I was able to spin this up in a few hours (that includes the time to create an ESXi template, a vCenter install and shared storage with a NexentaStor CE VM), which is not bad considering most of those components are now reusable in other blueprints.

Obviously I'm not the only one who works in the division, so this was shared amongst all the other engineers, and now, regardless of who it is, if we are collaborating on a development effort we all have consistent test conditions that were otherwise impossible to reproduce.

Have we found a new factor that may affect how the script works?

Run the blueprint, make the environmental change, spin it down, create a new blueprint and there we are - consistent environments for all members with this new variable. Perfect.

Summary

Overall we've been very happy with Ravello. It fits our needs perfectly, and I can see it (with the API support) becoming a CI tool for us, with tests running automatically on different environments at milestones. We have a way to go yet, but Ravello have been more than helpful and we don't foresee this being a problem!

The post Provisioning and running on-demand ESXi labs on AWS and Google Cloud for automation testing – Managed Services Platform and delivery appeared first on The Ravello Blog.

Data-center like networking on AWS & Google Cloud

Ravello’s software defined networking overlay enables some really cool functionality that allows network and security enthusiasts, ISVs and enterprises to build full-featured labs with Layer 2 access on AWS & Google Cloud – capabilities otherwise unsupported on public cloud (due to heavy filtering of broadcast & multicast packets). This gives one access to rich data-center-type networking with the flexibility, geographical reach, scale and cost economics of the public cloud.

Thanks to an overwhelming adoption by the network community, we have put together a series of short videos that can help you get started with building your very own networking lab using Ravello:

  1. How to create a DHCP network [on Ravello]
  2. How to create a static IP network
  3. How to handle a mix of static & DHCP IPs
  4. How to add public IP to the VMs
  5. How to add elastic IP to the VMs
  6. Why & how to setup port-forwarding to access your VMs
  7. How to add additional NICs to VMs
  8. How to add multiple IPs to a NIC
  9. How to setup VLANs
  10. How to configure MAC address on NIC
  11. How to add external network device (e.g. router)
  12. How to do port mirroring

Network & security appliances from leading vendors run on Ravello’s Network & Security Smart Lab. Interested in learning how Ravello’s software defined networking works – watch this video.

Video: https://www.youtube.com/watch?v=st3yMLpd_8Y

If you need any help, just let us know – we are standing by.

The post Data-center like networking on AWS & Google Cloud appeared first on The Ravello Blog.

How to run leading network & security appliances on AWS & Google with L2 Networking

Ravello’s software defined networking overlay makes it possible to create full-featured network & security labs on public cloud. With clean Layer 2 networking access, enterprises, ISVs and their resellers have adopted Ravello for a variety of use-cases – network modeling, development-testing, training, sales demos, PoCs, cyber ranges, and security sandbox environments, to name a few.

A networking or security lab is only as sophisticated as its deployment setup and the appliances running in it. To help our network & security enthusiasts create a lab of their dreams, we have put together a series of guides on how to run popular network & security appliances on Ravello. Go ahead – read these “how-tos”, and build the lab that you always dreamt of – all with a couple of clicks!

  1. Cisco CSR 1000v
  2. Juniper vSRX
  3. Palo Alto Networks VM Series
  4. Arista vEOS
  5. Fortinet FortiGate
  6. Splunk
  7. Citrix NetScaler
  8. Barracuda Firewall
  9. Cumulus VX
  10. F5 Big IP
  11. Infoblox

To try Ravello's network or security lab, sign up for our 14-day free trial. A credit card is not required.

If you have any questions, let us know – we are standing by to help.

The post How to run leading network & security appliances on AWS & Google with L2 Networking appeared first on The Ravello Blog.


Building Openstack lab with packstack on AWS and Google Cloud

In this blog, we will describe step-by-step instructions for building a multi-node OpenStack lab with Packstack that you can run on AWS and Google Cloud. You can build and run these labs on Ravello Systems, whose platform makes AWS and Google look like real hardware. Ravello’s technology consists of a high-performance nested virtualization engine and an overlay network technology that enables developers, ISVs and enterprises deploying OpenStack in their data centers to run development, testing, staging and upgrade-testing environments in the AWS or Google cloud with KVM hardware acceleration.

Start by constructing a new application in Ravello Systems:

image22

Copy this CentOS 7.1.1503 image from the Ravello repo to your VM library. It will show up under Shared in your VM library; copy it to your personal library.

Drag 4 of the CentOS 7.1.1503 images onto the canvas.

image18

image20

You will get design errors, as the VMs have numerous conflicts. Start by selecting one and configuring it as the controller. First, set the name and hostname.

image15

Then set CPU and RAM:

image00

Finally, configure the network, starting with eth0:

image07

eth1:

image24

and eth2:

image13

Add the relevant ports to the services section of the controller node:

image02

image12

With the controller done, the number of errors should drop:

image23

We now move on to the neutron node:

image10

Set the CPU/Memory:

image06

Setup eth0:

image04

and eth1:

image05

And the additional IP for eth0(under advanced, additional IP addresses):

image17

Assign an elastic IP to it:

image11

Now modify the compute nodes:

image03

image21

Set the CPU / memory:

image16

Assign an extra disk (for ceph):

image14

image09

Fix up the network starting with eth0:

image19

and the eth1:

image01

Repeat for your compute-2 node.

What you will end up with is a network diagram resembling the following:

image08

The basics are done.
10.0.0.0/8 is the openstack network
172.16.0.0/16 is the ceph network
192.168.0.0/24 is the public network

Console into the controller, run ssh-keygen, and copy /root/.ssh/id_rsa.pub somewhere.
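A minimal way to do that (the empty passphrase is an assumption for lab convenience):

ssh-keygen -t rsa -N "" -f /root/.ssh/id_rsa
cat /root/.ssh/id_rsa.pub    # copy this output; it gets appended to authorized_keys on each node below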

Set up networking:
/etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE="eth0"
ONBOOT="yes"
TYPE="Ethernet"
BOOTPROTO="static"
IPADDR="192.168.0.2"
NETMASK="255.255.255.0"
GATEWAY="192.168.0.1"
DNS1="192.168.0.1"

Upon doing a network restart you will now be able to ssh into the server.

/etc/sysconfig/network-scripts/ifcfg-eth1
DEVICE="eth1"
BOOTPROTO="dhcp"
ONBOOT="yes"
TYPE="Ethernet"
PEERDNS="no"
PEERROUTES="no"

/etc/sysconfig/network-scripts/ifcfg-eth2
DEVICE="eth2"
BOOTPROTO="dhcp"
ONBOOT="yes"
TYPE="Ethernet"
PEERDNS="no"
PEERROUTES="no"

/etc/sysconfig/network
GATEWAY="192.168.0.1"

/etc/init.d/network restart

add /root/.ssh/id_rsa.pub to /root/.ssh/authorized_keys

On neutron:
/etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE="eth0"
ONBOOT="yes"
TYPE="Ethernet"
BOOTPROTO="static"
IPADDR="192.168.0.3"
NETMASK="255.255.255.0"
GATEWAY="192.168.0.1"
DNS1="192.168.0.1"

/etc/sysconfig/network
GATEWAY="192.168.0.1"

/etc/sysconfig/network-scripts/ifcfg-eth1
DEVICE="eth1"
BOOTPROTO="dhcp"
ONBOOT="yes"
TYPE="Ethernet"
PEERDNS="no"
PEERROUTES="no"

add controller:/root/.ssh/id_rsa.pub to /root/.ssh/authorized_keys

On compute-1 and compute-2:
add controller:/root/.ssh/id_rsa.pub to /root/.ssh/authorized_keys

append the following to /etc/sysconfig/network-scripts/ifcfg-eth1 (use compute-2.localdomain on compute-2):
DHCP_HOSTNAME="compute-1.localdomain"

edit /etc/sysconfig/network-scripts/ifcfg-eth1
DEVICE="eth1"
BOOTPROTO="dhcp"
ONBOOT="yes"
TYPE="Ethernet"
PEERDNS="no"
PEERROUTES="no"
DHCP_HOSTNAME="compute-1.localdomain"

/etc/sysconfig/network
HOSTNAME="compute-1.localdomain"

Remove the hostname module from /etc/cloud/cloud.cfg under the cloud_init_modules section.

Hop back over to the controller:
scp -pr ~/.ssh compute-1:~/.
scp -pr ~/.ssh compute-2:~/.
scp -pr ~/.ssh neutron:~/.

You can now proceed with a packstack or other install. We’ll go into a packstack / ceph install on this in the next entry.
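As a preview, the install itself follows the usual Packstack flow. A rough sketch, assuming the RDO Kilo repository has already been enabled on the controller (exact repository setup and answer-file options are covered in the next entry):

yum install -y openstack-packstack
packstack --gen-answer-file=/root/answers.txt
# edit answers.txt: point CONFIG_CONTROLLER_HOST, CONFIG_COMPUTE_HOSTS and
# CONFIG_NETWORK_HOSTS at the 192.168.0.x addresses configured above
packstack --answer-file=/root/answers.txt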

The post Building Openstack lab with packstack on AWS and Google Cloud appeared first on The Ravello Blog.

How to build large scale Openstack environments in AWS and Google Cloud

In this blog, we will describe how to scale up the blueprint described in this entry.

The easiest way to scale is to increase the amount of RAM and CPU reserved for each of the compute nodes when copying the blueprint; however, there is a limit to how far this will take you, so this week we will go over how to add additional compute nodes to the OpenStack blueprint.

Spawn the blueprint

Spawn the “multi-node OpenStack Kilo” one-click blueprint. Prior to doing so, make any modifications you need to.

Create a compute VM Image

Within your running application, navigate to the canvas; you should see something like this:

image07

Select one of the compute nodes and add it to your library:

image12

Shut it down and name it something you will associate with compute, like say, ‘compute.’

image01

This will shut down, snapshot, and copy the VM into your library. After a bit of a wait (the selected machine has to shut down to be snapshotted, and the snapshot then needs to be saved), you will be able to add the compute node to your application.

Add a new compute node to the application

In the canvas, click the plus sign and drag the brand new “compute” vm onto the canvas somewhere.

image00

Correct Design Errors

This will cause a design error (and one pending change) to come up (pardon my lack of alignment):

image06

The reason for this is that the DHCP IPs (which we reserve) on eth0/1, as well as the hostname, are all in use. Correct the hostname first:

image08

Then proceed to correct eth0 (start at .7, neutron has .6):

image04

and eth1:

image03

After saving these, the blueprint errors will go away.

Update the application

image10

Verify networking

You can now go over and checkout the network view, it should look something like this:

image05

Modify the Hosts to Remove Conflicts from the new Compute Node

All is not done, however. The new compute node is going to boot with the identity of the compute node you selected as the source of the image. To fix this, you will first need to log in to it, either via the console or by first SSHing to the controller and then to the host.

Reset the Hostname

Reset the hostname, first via hostnamectl set-hostname compute-3.localdomain, then by editing /etc/sysconfig/network in your favorite editor. Finally do a “systemctl disable ceph” to prevent ceph from starting, and reboot the machine.
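Put together, the steps on the new node look like this (use whichever editor you prefer for the file edit):

hostnamectl set-hostname compute-3.localdomain
vi /etc/sysconfig/network          # set HOSTNAME="compute-3.localdomain"
systemctl disable ceph             # stop ceph claiming the cloned identity on boot
reboot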

Evaluate any conflict between Compute-1 and Compute-3

It is possible, but not likely, that if you did the above quickly enough, compute-1 came back into the mix and assumed control over its OSD / identity. You can check this by navigating to /etc/ceph on the controller and running ceph osd tree. If compute-3 shows up as having osd.0, you will need to restart compute-1.

Prepare the new Host

You may want to use the extra disk (/dev/sdb) attached to the new compute node to also scale out storage. To do so, remain in /etc/ceph and run the following commands:
ceph-deploy disk zap 172.16.0.7:sdb
ceph-deploy osd create 172.16.0.7:sdb

Verify ceph is happy

You can check on this with a ceph osd tree, it should look like this:

image09

And the extra capacity should now show up in a rados df:

image02

Verify neutron and nova

Finally, verify neutron and nova are happy:

image11
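The checks behind that screenshot can be reproduced from the controller with the standard Kilo-era clients (assuming the admin credentials file created during the install is available and sourced):

source /root/keystonerc_admin        # assumption: packstack-generated admin credentials
nova service-list                    # nova-compute should be up/enabled on compute-3
neutron agent-list                   # the agents on compute-3 should show as alive
nova hypervisor-list                 # compute-3.localdomain appears as a hypervisor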

Adding additional nodes

To add additional nodes, simply repeat the above process as many times as you wish.

The post How to build large scale Openstack environments in AWS and Google Cloud appeared first on The Ravello Blog.

LISP Leaf & Spine architecture with Arista vEOS using Ravello on AWS

Author:
Matt Conran
Matt Conran is a Network Architect based out of Ireland and a prolific blogger at Network Insight. In his spare time he writes on topics ranging from SDN, OpenFlow, NFV, OpenStack, Cloud, Automation and Programming.

This post discusses a Leaf and Spine data center architecture with the Cisco Locator/ID Separation Protocol (LISP) based on Arista vEOS. It begins with a brief introduction to these concepts and continues to discuss how one can set up a fully functional LISP deployment using Ravello’s Network & Security Smart Labs. If you are interested in running this LISP deployment, just open a Ravello account and add this blueprint to your library.

What is Locator/ID Separation Protocol (LISP)?

The IP address is an overloaded construct: we use it to determine both “who” we are and “where” we are located in the network. The lack of abstraction causes problems, as forwarding devices must know all possible forwarding paths to forward packets. This results in large forwarding tables and the inability of end hosts to move and keep their IP address across Layer 3 boundaries. LISP separates the host identity from the routing path information, in the same way the Domain Name System (DNS) solved the local host file problem. It uses overlay networking concepts and a dynamic mapping control system, so its architecture looks similar to that of a Software Defined Network (SDN).

The LISP framework consists of a data plane and a control plane. The control plane is the registration protocol and procedures, while the data plane is the encapsulation/de-encapsulation process. The data plane specifies how EIDs (end hosts) are encapsulated in Routing Locators (RLOCs), and the control plane specifies the interfaces to the LISP mapping system that provides the mapping between EID and RLOC. An EID could be represented by an IPv4, IPv6 or even a MAC address; if represented by a MAC address, it would be Layer 2 over Layer 3 LISP encapsulation. The LISP control plane is very extensible and can be used with other data path encapsulations such as VXLAN and NVGRE. Future blueprints will discuss Jody Scott's (Arista) and Dino Farinacci's (LISP author) work towards a LISP control plane with a VXLAN data plane, but for now, let's build a LISP cloud with LISP standards, inheriting parts of that blueprint.

What does LISP enable?

LISP enables end hosts (EIDs) to move and attach to new locators. The host has a unique address, but that IP address does not live in the subnet that corresponds to its location – it is not location locked. You can pick up the endpoint and move it anywhere. For example, smartphones can move from WiFi to 3G to 4G. There are working solutions for operating an open LISP ecosystem (lispers.net) that allow an endpoint to move around the data center and across multiple vendors while keeping its IP address. No matter where it moves, the endpoint's IP address will not change.

At an abstract layer the EID is the “who” and the Locator is the “where the who is”.

Abstract Layer

Leaf & Spine Architecture

Leaf and Spine architectures are used to speed up connectivity and improve bandwidth between hosts. The underlying Clos design (named after Charles Clos) is a relatively old concept, but it does go against what we have been doing in traditional data centers.

Traditional data centers have three layers – core, aggregation and access – with some oversubscription between the layers. The core is generally Layer 3, with the access layer being Layer 2. If Host A needs to communicate with Host B, the bandwidth available to those hosts depends on where they are located. If the hosts are connected to the same access (ToR) switch, traffic can be locally switched. But if a host needs to communicate with another host via the aggregation or core layer, it will have less bandwidth available due to the oversubscription ratios and aggregation points. The bandwidth between the two hosts depends on their placement. This results in a design constraint, as you have to know in advance where to deploy servers and services. You do not have the freedom to deploy servers in any rack that has free space.

The following diagram displays the Ravello Canvas settings for the leaf and spine design. Nodes labelled “Sx” are spine nodes and “Lx” are leaf nodes. There are also various compute nodes representing end hosts.

image03

What we really need are equidistant endpoints. The placement of a VM should not be a concern: wherever you deploy a VM, it should have the same bandwidth to any other VM. Obviously, there are exceptions for servers connected to the same ToR switch. The core should also be non-blocking so inbound and outbound flows are independent; we don't want an additional blocking element in the core. Networks should also provide unlimited workload placement and the ability to move VMs around the data center fabric.

Three-tiered data center architectures are not as scalable and add complexity to provisioning. You have to really think about where things are in the data center to give the user the best performance. This increases costs, as certain areas of the data center end up underutilized, and underutilized servers lose money. To build your data center as big as possible with equidistant endpoints, you need to flatten the design and build a leaf and spine architecture.

I have used Ravello Network & Security Smart Lab to set up a large leaf and spine architecture based on Arista vEOS to demonstrate LISP connectivity. Ravello gives you the ability to scale to very large virtual networks, which would be difficult to do in a physical environment. Implementing a large leaf and spine architecture in a physical lab would require lots of time, rack space and power – but with Ravello, it is a matter of a few clicks.

Setting up LISP cloud on Ravello

The core setup on Ravello consists of 4 spine nodes. These nodes provide the connectivity between the other functional blocks within the data center and provide IP connectivity between end hosts. The core should forward packets as fast as possible.

The chosen fabric for this design is Layer 3, but if the need arises we can easily extend Layer 2 segments with a VXLAN overlay – see the previous post on VXLAN for bridging Layer 2 segments. The chosen IP routing protocol is BGP, with BGP neighbors set up between the spine and leaf nodes. BGP not only allows you to scale networks, it also decreases network complexity: neighbors are explicitly defined and policies are configured per neighbor, offering a deterministic design. Another common choice for this design is OSPF, with each leaf in a stubby area. Stubby areas are used to limit route propagation.
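
As a reference point, a leaf-to-spine BGP session in Arista EOS looks roughly like the snippet below. This is a minimal sketch only; the AS numbers, router ID and addresses are illustrative and are not taken from the blueprint configuration.

router bgp 65105
   router-id 10.10.10.5
   ! eBGP peering towards a spine on the point-to-point link
   neighbor 172.16.105.0 remote-as 65000
   ! Advertise the leaf loopback into the underlay
   network 10.10.10.5/32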

The Leaf nodes connect hosts to the core and they are equivalent to the access layer. They are running Arista vEOS and support BGP to the spine. We are using 4 leaf nodes located in three different racks.

XI is the management JUMP host and is enabled for external SSH connectivity. It is used to manage the internal nodes, and it is from here that you can SSH to the entire network.

The following diagram displays access from XI to L5. Once on Leaf 5 we issue commands to display BGP peerings. The leaf nodes run BGP with the Spine nodes.

image04

We also have 4 compute nodes in three racks. These nodes simulate end hosts and run Ubuntu. Individual devices do not have external connectivity, so in order to access them via a local SSH client you must first SSH to XI.

LISP Configuration

LISP is enabled with the lisp.config file, which is present on C1, C2, L5 and L6. The software is Python based and can be found in the directory listed below. If you need to make changes to this file or view its contents, enter Bash mode within Arista vEOS and open it with the default text viewer.

image02

None of the Spine nodes run the LISP software; they transport IP packets by traditional means, i.e. they do not encapsulate packets in UDP or carry out any LISP functions. Leaf nodes L5 and L6 perform the LISP xTR functions and carry out the encapsulation and decapsulation.

The diagram below displays the output from a tcpdump taken while in Bash mode. ICMP packets are sent from the LISP source loopback of C9 (5.5.5.5) to C11 (6.6.6.6). These IP addresses are permitted by the LISP process to trigger LISP encapsulation, so you need to ping between this source and destination to trigger the LISP process. All other traffic flows are routed normally. (A suggested capture filter is shown after the screenshot.)

image05
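
If you want to capture the encapsulated traffic yourself, filtering on the standard LISP data plane port keeps the output readable. The interface name below is an assumption and will differ depending on which vEOS port carries the traffic; 4341 is the well-known UDP port for LISP-encapsulated data (4342 carries the control plane).

sudo tcpdump -ni eth1 'udp port 4341'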

C1 & C2 are the LISP mapping servers and perform the LISP control plane services. The following Wireshark captures display the LISP UDP encapsulation and the control plane map register requests to 172.16.0.22.

image00

Before you begin testing, confirm that the LISP processes have started on C1, C2, L5 and L6 with the command ps -ef | grep lisp. If the expected processes are not listed, restart LISP with the command ./RUN-LISP.

Conclusion

LISP in conjunction with a Leaf-Spine topology helps architect efficient & scalable data-centers. Interested in trying out the LISP Leaf-Spine topology mentioned in this blog? Just open a Ravello account and add this blueprint to your library.

I would like to thank Jody Scott and Dino Farinacci for collaborating with me to build this blueprint.

The post LISP Leaf & Spine architecture with Arista vEOS using Ravello on AWS appeared first on The Ravello Blog.

Big Switch Labs – Running self-service, on-demand VMware vCenter/ESX and OpenStack based Open SDN Fabric demo environments in AWS and Google Cloud


big-switch-labs-logo

Author:
Sunit Chauhan
Director, Big Switch Labs

At Big Switch Networks, we are taking key hyperscale data center networking design principles and applying them to fit-for-purpose products for enterprises, cloud providers and service providers. Our Open SDN Fabric products, built using bare metal switching hardware and centralized controller software, deliver the simplicity and agility required to run a modern data center network. Through seamless integration and automation with VMware (vSphere/NSX) and OpenStack cloud management platforms, virtualization and networking teams are now able to achieve 10X operational efficiencies compared to the legacy operating models.

In addition to product innovation, we are also very focused on enabling our customers and partners to learn about the latest technology advances in the networking/SDN industry and make informed decisions without a large time investment. Towards that end, Ravello Systems has been a great partner, enabling us to achieve that goal through Big Switch Labs – an online portal that lets you try VMware or OpenStack networking in real time, on demand and for free!

Big Switch Labs

For the past few months Big Switch Networks has employed the Ravello Systems platform to run demos of the Big Cloud Fabric integration with VMware vCenter and OpenStack, exposed through Big Switch Labs. The unique nested virtualization capabilities of Ravello Systems allow us to provision, within a few minutes, complete VMware vCenter/ESX and OpenStack demo environments in the public clouds AWS and Google Cloud. The demos are provisioned from blueprints, the term Ravello uses to describe a snapshot of an entire multi-virtual machine (VM) application, along with the full specification of the network that interconnects the set of VMs.

To experience modern, highly automated data center networking, check out the following, on-demand modules on Big Switch Labs. These modules include access to the production-grade Cloud Management software, Big Cloud Fabric Controller as well as the simulated physical networking topology:

  • VMware vCenter Integration with Big Cloud Fabric

    Experience the seamless integration of VMware vCenter and Big Cloud Fabric. Users provision virtual distributed switches and port groups in the vSphere Web Interface and observe the automated provisioning of the networking infrastructure from the BCF Controller GUI dashboard.
  • OpenStack Integration with Big Cloud Fabric

    Get hands-on experience with the seamless integration of OpenStack and Big Cloud Fabric (P+V Edition) using Big Switch’s Neutron plugin. Users provision OpenStack projects and networks using the Horizon GUI and observe the automated provisioning of the physical and virtual networking infrastructure from the BCF Controller GUI dashboard. Explore the latest Big Switch enhancements to the OpenStack Horizon dashboard.

I invite you to sign up and experience the simplicity of managing, provisioning and troubleshooting data center networks in minutes. And yes, it's available now and it's free!

Sign-up for Big Switch Labs

The post Big Switch Labs – Running self-service, on-demand VMware vCenter/ESX and OpenStack based Open SDN Fabric demo environments in AWS and Google Cloud appeared first on The Ravello Blog.

Building an Openstack lab from scratch with PackStack on AWS and Google Cloud – Installing OpenStack via Packstack


OpenStack Cloud Software

Packstack is meant to be a really easy way to install OpenStack - and it is. It skips quite a few things you should probably do for a production deployment, and the config it produces tends to deploy a single instance of services that should really be clustered or at least have some form of HA. But it works - for messing around it's great. You get relatively sane configs and setups that you can reference and experiment with, and it also scales horizontally surprisingly far. The underlying Puppet modules it uses are pretty useful too, and you can go in after it and fix its shortcomings.

Construct the blueprint

You can follow along with this entry, copy this setup from repo, or do it yourself.

image05

If you do it yourself, the assumption for this and future entries is that there are 4 nodes: 1 controller, 1 neutron node and 2 compute nodes:

  • The controller node should be configured for 3 networks:
    • 192.168.0.0 - the “public network”
    • 172.16.0.0 - “the ceph network”
    • 10.0.0.0 - the openstack network.
    • 80 and 443 should be defined in services and set to the public network.
  • The neutron node should have 2 networks - the public and openstack network. It should have extra IPs in the 192.168.0.0 space and have IP ANY service rules set up to them.
  • The compute nodes should have 2 networks - the openstack and ceph networks.
  • There should be a shared ssh root keypair for convenience.

Additionally, cloud-init should have hostname management turned off, and hostnames should be statically configured. If this isn’t done you’ll get inconsistencies when copying things around.
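
On a stock CentOS 7 image, one way to do this is shown below; the hostname and the cloud.cfg path are assumptions, so adjust them to your own images and naming scheme.

# Stop cloud-init from rewriting the hostname on boot
echo "preserve_hostname: true" >> /etc/cloud/cloud.cfg
# Set the hostname statically (repeat on each node with its own name)
hostnamectl set-hostname controller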

Install the RDO repo and Packstack

On the controller node, install the RDO repo and packstack:
yum install -y https://rdoproject.org/repos/rdo-release.rpm
yum install -y openstack-packstack

Perform the packstack install

packstack --os-horizon-ssl=y --os-cinder-install=n --os-swift-install=n --default-password=openstack --os-controller-host=10.0.0.3 --os-compute-hosts=10.0.0.4,10.0.0.5 --os-network-hosts=10.0.0.6 --amqp-host=10.0.0.3 --mariadb-host=10.0.0.3 --mongodb-host=10.0.0.3 --redis-master-host=10.0.0.3 --provision-demo=n --ntp-servers=0.north-america.pool.ntp.org,1.north-america.pool.ntp.org,2.north-america.pool.ntp.org,3.north-america.pool.ntp.org --gen-answer-file=/root/answers.txt
packstack --answer-file=/root/answers.txt

This install will take 30+ minutes. Make sure you don't have an autostop job that will stop it, and grab a bite to eat or some such. If everything goes well it should finish with something that looks like this:

image15

When it's done, you should be able to access the horizon dashboard:

image10

We're using a self-signed cert so it will warn you, that's fine.

From the dashboard we'll do the rest of the config - but first, let's finish up the CLI setup. If you can, it's generally a good idea to snapshot everything before this next step.

Configure the Neutron Node

Connect to the neutron node and start by configuring the OVS bridge. If you ever find yourself without internet access and need to do an OVS config - there's some really, really good documentation that gets installed along with OVS at /usr/share/doc/openvswitch-2.3.1/README.RHEL.

/etc/sysconfig/network-scripts/ifcfg-eth0

DEVICE=eth0
TYPE=OVSPort
DEVICETYPE=ovs
OVS_BRIDGE=br-ex
ONBOOT=yes

/etc/sysconfig/network-scripts/ifcfg-br-ex

DEVICE=br-ex
DEVICETYPE=ovs
TYPE=OVSBridge
BOOTPROTO=static
IPADDR=192.168.0.3
NETMASK=255.255.0.0
GATEWAY=192.168.0.1
DNS1=192.168.0.1
ONBOOT=yes

Now reconfigure neutron to use br-ex:

openstack-config --set /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini ovs bridge_mappings extnet:br-ex
openstack-config --set /etc/neutron/plugin.ini ml2 type_drivers vxlan

Reboot. You should be able to connect into neutron via ssh.
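
Before moving on, a quick sanity check doesn't hurt; these are standard OVS and iproute2 commands rather than anything blueprint-specific, and should show eth0 attached to br-ex and the 192.168.0.3 address sitting on the bridge.

ovs-vsctl show
ip addr show br-ex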

Install an image from the command line into glance

Hop back over to the controller and install the CirrOS image into glance:

curl -o cirros-0.3.4-x86_64-disk.img http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-disk.img
. ~/keystonerc_admin
glance image-create --name cirros-0.3.4 --disk-format qcow2 --container-format bare --is-public True --file /root/cirros-0.3.4-x86_64-disk.img --progress
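
To confirm the upload worked, listing the images should show cirros-0.3.4 in an active state (with the same keystonerc_admin sourced as above):

glance image-list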

Set Up the Tenants and Users

Log in to the dashboard as user admin, password openstack.

Navigate to identity, projects:

image28

Create a new project called tenant (or whatever you want really):

image01

And create a user associated with that project:

image17

Log out from the admin user.

Configure Networking

As the user you just created

Log in as the user user and navigate to networks:

image11

Create the private network:

image22

image06

image19

And the public network:

image23

image20

image14

To make the public network externally accessible, log out and log back in as admin.

As the “admin” User

Navigate to the network administration section of the dashboard:

image25

And edit the public network - set it to an external network:

image27

Log out of admin and log back in as user

As the “user” User

Navigate to the router section of networking:

image02

Create the router:

image16

Add the private interface to the router:

image13

And you should be able to view the networking setup by navigating to network topology:

image26

Access and Security

Before starting an instance there are a few things we need to do:

Allocate Floating IPs

If you copied the blueprint, you should go ahead and allocate the 3 floating IPs to the tenant project now:

image00

Set Up a Security Group

If you want SSH or ICMP inbound you’ll need to configure a security group. Here’s an example of SSH and ICMP security group rules (a CLI equivalent follows the screenshot):

image30
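
If you prefer the CLI over the dashboard, the same rules can be created with the nova client of this release. The group name is illustrative, and this assumes you run it as the tenant user with that user's keystonerc sourced.

nova secgroup-create ssh-icmp "allow ssh and icmp inbound"
nova secgroup-add-rule ssh-icmp tcp 22 22 0.0.0.0/0
nova secgroup-add-rule ssh-icmp icmp -1 -1 0.0.0.0/0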

Creating an Instance

Navigate to instances and click launch instances. Set up a cirrOS image, m1.tiny, with the ssh and icmp security group and attached to the private network:

image04

image21

image29

When you click launch, it should look like this:

image12

The instance task will go from scheduling -> spawning -> active, and if you click through the instance and view the log, you should see it boot successfully.

Accessing an Instance from the Internet

Assign the instance a floating IP by clicking associate floating-ip:

image03

And the floating ip should display in the instances view:

image24

To verify this worked correctly, ping the floating IP from the controller node:

image18

Now, get the IP association from the ravello dashboard. 192.168.0.33 maps to eth0/1 on neutron via our blueprint:

image09

Your IP will differ. From the outside world, verify you can ping and ssh to that IP:

image08

image07

Final Thoughts

There are a number of things left undone here. Unlike the one-click blueprint, swift and cinder are not configured, nor is ceph. We’ll be going into these in a future entry.

The final form of this blueprint is available here on repo. Many of the minor issues with the one-click blueprint persist when copying it (these are not present if you do the install from scratch) - i.e. you may need to hard-reboot the test instance for everything to come up correctly.

The post Building an Openstack lab from scratch with PackStack on AWS and Google Cloud – Installing OpenStack via Packstack appeared first on The Ravello Blog.

Cisco enables customers to test hybrid cloud designs using Ravello


Cisco

We are excited to share that Cisco Systems is working with Ravello Systems to enable its customers to test hybrid cloud designs with Cisco Cloud Services Router (CSR1000v) in a full-featured real world deployment environment. Users just need to open a Ravello account and add the fully functional CSR1000v ‘blueprint’ from Ravello to their library to gain access.

With the Cisco Cloud Services Router available on Ravello, Cisco customers can now easily test a small network in the cloud, simulate a hybrid cloud environment by extending a physical datacenter to AWS or Google Cloud, or create a scalable virtual networking lab on any cloud, on demand, with a couple of clicks.

Read more on Cisco’s blog on how to get your Cisco CSR 1000v environment running on Ravello.

The post Cisco enables customers to test hybrid cloud designs using Ravello appeared first on The Ravello Blog.

VSAN 6.1 environment on AWS and Google Cloud


itq

Install and run VSAN 6.1 environment for sales demo, POC and training labs on AWS and Google Cloud

With the new release of VSAN 6.1, quite a few people will be interested in installing this new version to test out the new features and to showcase their storage management products working with this release. With Ravello, you can do this without requiring a prohibitive physical test setup (3 hosts, with SSD and storage). You can set up and run a multi-node ESXi environment on AWS and Google Cloud, then configure VSAN 6.1 and save the setup as a blueprint in Ravello. If you are an ISV, you can then run your appliances directly on Ravello or on top of ESXi in this setup and build a demo environment in the public cloud.

You can provide access to this blueprint to your sales engineers, who can then provision a demo lab on demand in minutes. You can also set up VSAN 6.1 virtual training labs for students on AWS and Google Cloud, without the need for physical hardware.

Setup Instructions

To set up this lab, first we start off with the following:

  • 1 vCenter 6.0U1 windows server
  • 2 clusters consisting of 3 ESXi hosts each

image00

If you want to, you could start off with a single cluster of 3 hosts, but this setup also allows us to test integration with products like vSphere Replication and Site Recovery Manager in the future, while also being able to expand to 4 hosts per cluster very quickly to test new VSAN features such as failure domains or stretched clusters.

Refer to the following blog on how to set up ESXi hosts on AWS and Google Cloud with Ravello.

Each host has the following specs:

  • 2 vCPU
  • 8 GB memory
  • 2 NIC (1 Management, 1 VSAN)
  • 4 additional disks on top of the OS disk, 100GB each. One of these disks will be used as a flash drive; the rest will serve as capacity disks

After publishing our labs and installing the software (or provisioning virtual machines from blueprints; I have blueprints for a preinstalled ESXi and vCenter, which saves quite some time), we can get started on the configuration of VSAN.

Starting with VSAN 6.1, the only thing we actually need to do here is to open the vSphere Web Client, open the VSAN configuration for the cluster and mark the first disk of each host as SSD. This is necessary because the underlying Ravello platform reports the disks to ESXi as spindle storage, and we need at least one flash disk for VSAN to work.

If you want to test the all-flash features of VSAN, you previously had to either use the ESXi shell/SSH or use community tools to configure SSD disks as capacity disks. With VSAN 6.1, this is all supported from the Web Client if you have the correct VSAN license. Still, the community tool can be useful if you have a large number of hosts or clusters and don’t want to manually mark each disk as SSD. While you could script this yourself through PowerShell or SSH, the tool of choice for this is the VSAN All-Flash configuration utility by Rawlinson Rivera, published on his blog Punching Clouds.
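
For reference, the older ESXi shell approach looks roughly like the following; treat it as a sketch, and note that the device identifier is an assumption that will differ in your environment (use esxcli storage core device list to find yours).

esxcli storage nmp satp rule add --satp VMW_SATP_LOCAL --device mpx.vmhba1:C0:T1:L0 --option "enable_ssd"
esxcli storage core claiming reclaim -d mpx.vmhba1:C0:T1:L0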

Installation

Start by installing vSphere as normal. For vCenter, I’ve chosen to use the Windows version since it is the easier one to install, but if you install the VCSA (either nested or by importing an existing VCSA as an OVF in Ravello) that works equally well. From an installation point of view, there is no difference between the two.

As you can see, I’ve created the following setup:

image02

By default, VSAN disk claiming is set to automatic. If you want to ensure that new disks are not added to capacity automatically, you’ll have to set this to manual when enabling VSAN. If you do select automatic claiming, ensure that your disks are marked as flash and configured correctly before enabling VSAN on your cluster; for automatic assignment, follow the rest of this blog before enabling VSAN at the cluster level.

image01

First we have to configure our second interface with a static IP address and mark the interface as usable for VSAN traffic. For each ESXi host, go to the Manage tab and open Networking -> VMkernel adapters. Select the "Add Host Networking" option, choose "VMkernel Network Adapter", create a new virtual switch and add the second NIC (vmnic1) to the standard switch.

image05

After this, select "Virtual SAN Traffic" under the "available services" header and configure an IP address.

Before we can start using VSAN, you’ll have to mark one (or all) of the disks as flash. If you want to use the standard VSAN configuration, mark a disk on each ESXi host as flash by going to the host configuration, then Storage -> Storage Devices. Select the disk and click the “mark disk as flash” button (the green square button with the F). Repeat this process for each host that you want to use in your VSAN cluster.

image03

After marking a disk as flash on each host, you can enable VSAN. If you’ve left the VSAN settings at their default, the disks will automatically be consumed to create a VSAN datastore. If you’ve set VSAN to only manually consume disks, you’ll need to assign the disks to the VSAN storage pool. This can be done by going into the cluster VSAN configuration, selecting Disk Management and clicking the “create a disk group” button for each host.

image04

Afterwards, you should see a healthy green status and have 4 disks assigned to a single disk group on each host.

image06

Saving your environment as a Blueprint

Once you have installed your VSAN environment, save it as a blueprint. Then you can share it with team members in your sales engineering organization, your training group, your customers/prospects and partners. With a few clicks they can provision a fully functional instance of this environment on AWS or Google Cloud for their own use. You don’t need to schedule time on your sales demo infrastructure in advance, you can customize your demo scenario using a base blueprint, provision as many student training labs as needed on demand, and pay per use.

The post VSAN 6.1 environment on AWS and Google Cloud appeared first on The Ravello Blog.


OpenStack environment with ceph and cinder on AWS and Google Cloud: Running a fully functional Openstack environment on AWS and Google Cloud for sales demo, POC and training labs


Ceph

Welcome to the final part of my OpenStack series on constructing and scaling OpenStack models in AWS / Google Cloud via Ravello (we started several weeks ago with the one-click blueprint referenced here).

In this entry we will be wrapping up by installing ceph and configuring cinder and nova to use it as the backing for volumes. We’re using ceph here as a distributed, highly resilient object store, and more specifically using rbd (rados block device) to back those volumes (basically virtual machine disks).

If you’re just joining us, you can start from scratch, pick it up at the packstack install, or just dive right in by starting from this blueprint. I would recommend starting at the packstack install.

Regardless, the assumption here is that you are in the same state as at the end of the packstack install - i.e. you have either done it or have started from the blueprint.

Install ceph

Create and install the ceph repository

Before we can install ceph, we have to tell the OS where to get it. Starting off on the controller node, create a ceph repository. This isn’t production, so we’re going to do some bad things like ignoring the GPG key – obviously don’t do this in prod.

/etc/yum.repos.d/ceph.repo

[ceph]
name=Ceph packages
baseurl=http://ceph.com/rpm-hammer/el7/x86_64/
enabled=1
gpgcheck=0
type=rpm-md
gpgkey=https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc

Now copy that file onto the two compute / OSD nodes:

scp /etc/yum.repos.d/ceph.repo compute-1:/etc/yum.repos.d/.
scp /etc/yum.repos.d/ceph.repo compute-2:/etc/yum.repos.d/.

Install ceph and ceph-deploy

On the controller, go ahead and install ceph and ceph-deploy:

yum install -y ceph ceph-deploy

Then hop onto the compute nodes and install both ceph and libvirt (libvirt isn’t installed by default with a packstack install and we’ll need it later for nova).

yum install -y ceph libvirt

Perform the initial configuration

We’re going to take advantage of ceph-deploy for this because it works really well and also for the sake of expediency.

On the controller create the initial ceph configuration:

cd /etc/ceph
ceph-deploy new controller --cluster-network 172.16.0.0/16 --public-network 10.0.0.0/16
echo "osd pool default size = 2" >> /etc/ceph/ceph.conf

Note that the osd pool default size of 2 here is not recommended. It’ll work as a proof of concept, but it’s seriously lacking in resilience.

Still on the controller create the mon and deploy the osds (compute-1 / compute-2):

ceph-deploy --overwrite-conf mon create-initial controller
ceph-deploy osd create --zap-disk compute-1:/dev/sdb
ceph-deploy osd create --zap-disk compute-2:/dev/sdb

Verify ceph installed and deployed correctly

With the following commands:

ceph osd tree
rados df
rados -p rbd ls

ceph osd tree should show compute-1 and compute-2 as up and possessing osd.0 and osd.1:

ID   WEIGHT      TYPE NAME          UP/DOWN   REWEIGHT      PRIMARY-AFFINITY
-1   0.06000     root default
-2   0.03000     host compute-1
 0   0.03000     osd.0              up        1.00000       1.00000
-3   0.03000     host compute-2
 1   0.03000     osd.1              up        1.00000       1.00000
 

rados df should show the default pool, rbd, with a raw size of about 73GB:

pool name                 KB      objects       clones     degraded      unfound           rd        rd KB           wr        wr KB
rbd                        0            0            0            0           0            0            0            0            0
  total used           67916            0
  total avail       73294484
  total space       73362400

rados -p rbd ls should show nothing. If it displays a fault as follows:

2015-09-19 17:47:31.317020 7fad5c6d1700  0 -- 10.0.0.3:0/1004452 >> 10.0.0.4:6800/3335 pipe(0x3b92710 sd=4 :0 s=1 pgs=0 cs=0 l=1 c=0x3b969b0).fault

Try restarting ceph on the node in question (i.e. above it’s compute-1); do the same if any node shows as down in the ceph osd tree. You can do this either via /etc/init.d/ceph restart or systemctl restart ceph. If this does not fix it, comment on this entry or the blueprint and we’ll look at why.

Install Cinder

On all 3 nodes (the controller, compute-1, and compute-2) install the packages with a:

yum install -y openstack-cinder

Create the cinder user and assign roles in keystone

On the controller do the following:

. /root/keystonerc_admin
keystone user-create --name cinder --tenant services --pass openstack --email cinder@localhost --enabled true
keystone user-role-add --user cinder --role admin --tenant services
keystone user-role-remove --user cinder --role _member_ --tenant services

Create the services and endpoints in keystone

For each of the following command sets, you will get back an id field when you do the service create. YOURID should be replaced with this id.

Set up the v1 (it’s due to be deprecated, but you may want it) api service and endpoint:

keystone service-create --type volume --name cinder --description Cinder\ Service
keystone endpoint-create --region RegionOne --publicurl=http://10.0.0.3:8776/v1/%\(tenant_id\)s  --internalurl=http://10.0.0.3:8776/v1/%\(tenant_id\)s --adminurl=http://10.0.0.3:8776/v1/%\(tenant_id\)s --service-id=YOURID

And the v2 api service and endpoint:

keystone service-create --type volumev2 --name cinder --description Cinder\ Service\ v2
keystone endpoint-create --region RegionOne --publicurl=http://10.0.0.3:8776/v2/%\(tenant_id\)s  --internalurl=http://10.0.0.3:8776/v2/%\(tenant_id\)s --adminurl=http://10.0.0.3:8776/v2/%\(tenant_id\)s --service-id=YOURID

Prep the database

By creating the database and the database user on the controller node:

mysql -e "create database cinder;"
mysql -e "grant ALL on cinder.* TO cinder@localhost IDENTIFIED BY 'openstack';"
mysql -e "GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' IDENTIFIED BY 'openstack';"

Edit the final section of /etc/cinder/api-paste.ini

[filter:authtoken]
paste.filter_factory = keystonemiddleware.auth_token:filter_factory
admin_tenant_name=services
auth_uri=http://10.0.0.3:5000/v2.0
admin_user=cinder
identity_uri=http://10.0.0.3:35357
admin_password=openstack

Set /etc/cinder/cinder.conf to the following:

[DEFAULT]
notification_driver=cinder.openstack.common.notifier.rpc_notifier
rpc_backend=cinder.openstack.common.rpc.impl_kombu
control_exchange=openstack
osapi_volume_listen=0.0.0.0
osapi_volume_workers=1
api_paste_config=/etc/cinder/api-paste.ini
glance_host=10.0.0.3
enable_v1_api=True
enable_v2_api=True
storage_availability_zone=nova
default_availability_zone=nova
auth_strategy=keystone
enabled_backends=rbd
use_syslog=False
debug=False
log_dir=/var/log/cinder
verbose=True
amqp_durable_queues=False

[database]
idle_timeout=3600
max_retries=10
retry_interval=10
min_pool_size=1
connection=mysql://cinder:openstack@10.0.0.3/cinder

[oslo_messaging_rabbit]
rabbit_host=10.0.0.3
rabbit_port=5672
rabbit_hosts=10.0.0.3:5672
rabbit_use_ssl=False
rabbit_userid=guest
rabbit_password=guest
rabbit_virtual_host=/
rabbit_ha_queues=False

[rbd]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_pool = volumes
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_flatten_volume_from_snapshot = false
rbd_max_clone_depth = 5
rbd_store_chunk_size = 4
rados_connect_timeout = -1
glance_api_version = 2
rbd_user = volumes

Note the RBD section won’t be used on the controller, but it doesn’t do any harm being there and simplifies things later.

Finish setting up the database now:

cinder-manage db sync

Enable and start cinder (api / scheduler)

On the controller node turn on cinder-api and the cinder-scheduler

systemctl enable openstack-cinder-api openstack-cinder-scheduler
systemctl start openstack-cinder-api openstack-cinder-scheduler

Verify cinder came up

Still on the controller, execute a:

openstack-status

This should have a section that looks like the following:

== Cinder services ==
openstack-cinder-api:                   active
openstack-cinder-scheduler:             active
openstack-cinder-volume:                inactive  (disabled on boot)
openstack-cinder-backup:                inactive  (disabled on boot)

If it doesn’t, make sure your /etc/cinder/cinder.conf and /etc/cinder/api-paste.ini resemble the ones above. If they do and it still doesn’t work, the relevant logs are /var/log/cinder/api.log and /var/log/cinder/scheduler.log.

Copy the cinder api config to the compute / osd nodes

Still on the controller execute the following:

scp /etc/cinder/api-paste.ini compute-1:/etc/cinder/.
scp /etc/cinder/api-paste.ini compute-2:/etc/cinder/.

We will be copying /etc/cinder/cinder.conf after finishing off the ceph configuration.

Configure Ceph as a Cinder / Nova backend

First we’ll need to create the pool. Both nova and cinder will be using the same ceph pool, volumes, which will be authenticated via cephx as, you guessed it, volumes. Navigate to /etc/ceph and set this up with the following:

cd /etc/ceph
ceph osd pool create volumes 128
ceph auth get-or-create client.volumes mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=volumes' > /etc/ceph/ceph.client.volumes.keyring

A rados -p volumes ls should return nothing without an error now.

Remove the tab in /etc/ceph/ceph.client.volumes.keyring so that it looks like this:

[client.volumes]
key = AQC87f1VfizmOBAAln/2CImzJ9J9Ro4mysHQew==

Copy the key to the compute nodes

scp /etc/ceph/ceph.client.volumes.keyring compute-1:/etc/ceph/.
scp /etc/ceph/ceph.client.volumes.keyring compute-2:/etc/ceph/.

Generate the client libvirt/virsh secret configuration

Still on the controller, pull the key out and store it in root:

ceph auth get-key client.volumes > /root/client.key

For the secret entry, you’ll need a uuid; you can use uuidgen for this purpose or use the one I got: 8597ac3e-614b-4ea9-8a0e-d4370d0cf9fc. If you choose to generate your own, substitute all further instances of 8597ac3e-614b-4ea9-8a0e-d4370d0cf9fc with your own. Armed with your uuid, create a secret.xml file.

/root/secret.xml


<secret ephemeral='no' private='no'>
  <uuid>8597ac3e-614b-4ea9-8a0e-d4370d0cf9fc</uuid>
  <usage type='ceph'>
    <name>client.volumes secret</name>
  </usage>
</secret>
Still armed with your uuid, append the following to the bottom of /etc/cinder/cinder.conf

rbd_secret_uuid = 8597ac3e-614b-4ea9-8a0e-d4370d0cf9fc

You can do this with a:

echo "rbd_secret_uuid = 8597ac3e-614b-4ea9-8a0e-d4370d0cf9fc" >> /etc/cinder/cinder.conf

Deploy the configuration and keys to the compute nodes

Copy cinder.conf onto compute-1 and compute-2:

scp /etc/cinder/cinder.conf compute-1:/etc/cinder/.
scp /etc/cinder/cinder.conf compute-2:/etc/cinder/.

And the ceph secrets:

scp /root/client.key compute-1:/root/.
scp /root/secret.xml compute-1:/root/.
scp /root/client.key compute-2:/root/.
scp /root/secret.xml compute-2:/root/.

Configure ceph backed cinder on the compute nodes

On both compute-1 and compute-2, execute all of the following.

First, make sure all the ownerships are correct:

chown -R cinder /etc/cinder
chown cinder /etc/ceph/ceph.client.volumes.keyring

Next, set up the virsh secret:

virsh secret-define --file /root/secret.xml
virsh secret-set-value --secret 8597ac3e-614b-4ea9-8a0e-d4370d0cf9fc --base64 $(cat /root/client.key)

Enable and start cinder:

systemctl enable openstack-cinder-volume.service
systemctl start openstack-cinder-volume.service

Move on to the next compute node, and on to the next section when you are out of compute nodes.

Verify Cinder

As you configure each of the compute nodes, you can make sure that systemctl status openstack-cinder-volume shows cinder as active. To actually verify everything works, however, you need to go back to the controller and create a test volume:

. ~/keystonerc_admin
cinder create --display-name test 2

A subsequent execution of a cinder list should show the volume as available:

+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
|                  ID                  |   Status  | Display Name | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
| d9d421d5-38f3-4b3b-9ee4-08f872fdaeaa | available |     test     |  2   |      -      |  false   |             |
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+

Note the id field here for cleanup later; it’s a uuid and so will vary on each run.

A rados -p volumes ls should show files now:

rbd_directory
rbd_header.374d19f2cec6
rbd_id.volume-d9d421d5-38f3-4b3b-9ee4-08f872fdaeaa

And a rados df should show some capacity usage:

pool name                 KB      objects       clones     degraded      unfound           rd        rd KB           wr        wr KB
rbd                        0            0            0            0           0            0            0            0            0
volumes                    1            3            0            0           0            5            3            7            1
  total used           70052            3
  total avail       73292348
  total space       73362400

Finally clean up after yourself. Delete the cinder volume by its id:

cinder delete d9d421d5-38f3-4b3b-9ee4-08f872fdaeaa

Configure Nova to be backed by Ceph

This is actually almost anticlimactic. On each of the compute nodes, modify /etc/nova/nova.conf and set the following parameters:

inject_partition=-2
images_type=rbd
images_rbd_pool=volumes
images_rbd_ceph_conf=/etc/ceph/ceph.conf
rbd_user=volumes
rbd_secret_uuid=8597ac3e-614b-4ea9-8a0e-d4370d0cf9fc

Restart nova-compute

systemctl restart openstack-nova-compute

Verify nova

On each compute node, you can verify nova is up with a systemctl status openstack-nova-compute. Problems will appear in /var/log/nova/nova-compute.log (and of course journalctl).

On the controller, a nova service-list with a sourced keystonerc_admin should show all smiles. Just to make sure though, try starting an instance. If it spawns, nova and ceph are fine (virtual wire problems and the like point to neutron).
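
If you want more than a smile from nova service-list, you can also check the pool from the controller after the instance spawns; with images_type=rbd, the ephemeral disk typically shows up as an image named after the instance UUID (this assumes the admin keyring from ceph-deploy is still present in /etc/ceph).

rbd ls -p volumes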

Next steps / final thoughts

This is the final entry in this series; I probably won’t be revisiting OpenStack for a while outside the comments sections, and there’s a lot left unsaid and undone (e.g. swift, glance) - but at this point you should have a basic foundation. Going over to RDO or Mirantis, or even the OpenStack site itself, and diving into the docs shouldn’t be quite as intimidating. We may have built toys, but they’re hopefully instructional ones - models you can tinker with and are comfortable building, taking apart and looking at. They even scale, almost frighteningly far actually.

The natural springboards from here are scaling things as you would in production on whitebox hardware - replacing the single-instance RabbitMQ and MariaDB with HA/clustered versions, ManageIQ/CloudForms, setting up load balancers, tweaking Ceph/CRUSH, setting up availability zones, and so on. Also, none of this is deterministically provisioned, config managed or containerized (at least two of those three being things you really want to do for anything serious), and I completely ignored important things like policy, metrics collection, logging, trending, and the like.

The post OpenStack environment with ceph and cinder on AWS and Google Cloud: Running a fully functional Openstack environment on AWS and Google Cloud for sales demo, POC and training labs appeared first on The Ravello Blog.

How to build a large scale BGP MPLS Service Provider Network and model on AWS – Part 1


mpls

Author:
Matt Conran
Matt Conran is a Network Architect based out of Ireland and a prolific blogger at Network Insight. In his spare time he writes on topics ranging from SDN, OpenFlow, NFV, OpenStack, Cloud, Automation and Programming.

Service Providers around the globe deploy MPLS networks using label-switched paths to improve quality of service (QoS) that meet specific SLAs that enterprises demand for traffic. This is a two-part post – Part 1 (this post) introduces MPLS constructs and Part 2 describes how to setup a fully-functional MPLS network using Ravello’s Network Smart Lab. Interested in playing with this 14 node Service Provider MPLS deployment – just add this blueprint from Ravello Repo.

Multiprotocol Label Switching (MPLS)

One could argue that Multiprotocol Label Switching (MPLS) is a virtualization technique. It is used to build virtual topologies on top of physical topologies by creating connections, explicitly called tunnels, between the network edges. MPLS is designed to reduce the forwarding information managed by the internal nodes of the network by tunneling through them. MPLS architectures build on the concept of complex edges and simple cores, allowing networks to scale and serve millions of customer routes.

MPLS changed the way we think about control planes. It pushed most of the control plane to the edge of the network. The MPLS Provider (P) nodes still run a control plane, but the complex decision making is now at the edge. In terms of architecture and scale, MPLS leads to very scalable networks and reduces the challenges of a distributed control plane. It allows service providers to connect millions of customers together and place them into separate VRF containers, allowing overlapping IP addressing and independent topologies.

The diagram below represents high level components of a MPLS network. Provider Edge (PE) nodes are used to terminate customer connections, Provider (P) nodes label switch packets and Route Reflectors (RR) are used for IPv4 and IPv6 route propagation.

Figure 1: Service Provider MPLS Network

How does MPLS work?

Virtual routing and forwarding (VRF) instances are like VLANs, except they are Layer 3 and not Layer 2. In a single VRF-capable router you have multiple logical routers with one single management entity. A VRF is like a full-blown router: it has its own routing protocols, topology, and independent set of interfaces and subnets. If you want to connect multiple routers with VRFs together, you can use either a Layer 2 trunk, Generic Routing Encapsulation (GRE) or MPLS.

First, you need an interface per VRF on each router, which means you need a VLAN or GRE tunnel for each VRF, as well as a VRF routing protocol on every single hop. This is known as Multi-VRF. For example, with a Multi-VRF approach, if you have 4 routers in sequence you have to run two copies of the routing protocol on every single path. Every router in the path must have the VRF configured, resulting in numerous routing protocol adjacencies and convergence events if a single link fails. Multi-VRF with its hop-by-hop configuration does not scale and should be used across a maximum of one hop. A more scalable solution is a full blown MPLS network, as described below.

An end-to-end MPLS implementation is more common than Multi-VRF. It builds a Label Switched Path (LSP) between every ingress and egress router. To enable this functionality, you have to enable LDP on individual interfaces. Routers send LDP Hello messages and establish an LDP session over TCP. LDP is enabled only on core (P to P, PE to P, P to RR) interfaces and not on user interfaces facing CE routers. Every router running LDP will assign a label to every prefix in the CEF table. These labels are then advertised to all LDP neighbors. Usually, you only need to advertise labels for the internal PE BGP next hops, enabling Label Switched Paths within the core.

Provider (P) and Provider Edge (PE) Nodes

MPLS shifted the way we think about the control plane and pushed the intelligence of the network to the edges. It changed how the data plane and control plane interrelate. The decisions are now made at the edge of the network by devices known as Provider Edge (PE) routers. PEs control the best-path decisions, perform end-to-end path calculation and handle any encapsulation/decapsulation. The PEs run BGP with a special address family called VPNv4. The Provider (P) routers sit in the core of the network and switch packets based on labels. They do not contain end-to-end path information and require path information for the remote PE next hop address only. The PE nodes run the routing protocols with the CEs. No customer routing information should be passed into the core of the network. Any customer route convergence events are stopped at the PE nodes, protecting the core. The P nodes run an internal IGP for remote PE reachability and LDP to assign labels to IP prefixes in the Cisco Express Forwarding (CEF) table.

The idea is to use a single protocol for all VRFs. BGP is the only routing protocol scalable enough, and it allows extra attributes to be added to prefixes, making them unique. We redistribute routes from the customer's IGP or connected networks from the VRFs into Multiprotocol BGP (MP-BGP). You do the same at the other connected end and the BGP updates are carried over the core. As discussed, transport between the routers in the middle can be a Layer 2 transport, a GRE tunnel or MPLS with Label Switched Paths (LSPs).
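
As a rough illustration of that redistribution point, an IOS-style PE configuration looks something like the snippet below. The AS number and OSPF process ID are arbitrary placeholders, not values from this lab.

router bgp 65000
 address-family ipv4 vrf VRFSite_A
  ! Pull the customer-facing connected and IGP routes into MP-BGP
  redistribute connected
  redistribute ospf 10
 exit-address-family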

Route Distinguishers (RD) and Route Targets (RT)

MPLS/VPN networks have the concept of Route Distinguishers (RD) and Route Targets (RT). The RD distinguishes one set of routes from another. It is a number prepended to each route within a VRF and is used to identify which VRF the route belongs to. A number of options exist for the format of an RD, but essentially it is a flat number prepended to a route.

The following snippet displays a configuration example of a RT and RD attached to VRF instance named VRFSite_A.

ip vrf VRFSite_A
rd 65000:100
route-target export 65000:100
route-target import 65000:100

A route for 192.168.101.0/24 in VRF Site_A is effectively advertised as 65000:100:192.168.101.0/24.

RTs are used to share routes among sites. They are assigned to routes and control the import and export of routes to VRFs. At a basic level, for end-to-end communication, an export RT at one site must match an import RT at the other site. Allowing sites to import some RTs and export others enables the creation of complex MPLS/VPN topologies, such as hub and spoke, full mesh and partial mesh designs.
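
As a simple illustration of asymmetric import/export, the sketch below (the values follow the same format as above but are otherwise arbitrary) forces spoke-to-spoke traffic through the hub: the hub imports the spoke RT, while the spoke only imports the hub RT.

ip vrf Hub_Site
 rd 65000:1
 route-target export 65000:1
 route-target import 65000:2

ip vrf Spoke_Site
 rd 65000:2
 route-target export 65000:2
 route-target import 65000:1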

The following shows a Wireshark capture of an MP-BGP update. The RT value is displayed as an extended community with value 65000:10, and the RD is also specified as 65000:10.

image07

When you redistribute from a VRF into the VPNv4 BGP table, a 64-bit user-configurable route distinguisher (RD) is prepended to every IPv4 address. Every IPv4 prefix gets 64 bits in front of it to make it globally unique. The RD is not the VPN identifier; it is just a number that makes the IPv4 address globally unique. Usually, we use the 2-byte AS number plus a 4-byte decimal value.

Route Targets (RT), an extended BGP community, are not part of the prefix but are attached to it. As discussed, the RT attribute controls the import process into the BGP table: it tells BGP which VRF the specific prefix should be inserted into.

The diagram below displays the generic actions involved in the traffic flow of an MPLS/VPN network. Labels are assigned to prefixes and sent in BGP update messages to the corresponding peers.

image06

Multi-protocol Extensions for BGP

MP-BGP allows you to carry a variety of information. More recently, BGP has been used to carry MAC addresses with a feature known as EVPN. It also acts as a control and DDoS prevention tool, downloading PBR and ACL entries to the TCAM on routers with BGP FlowSpec. It is a very extensible protocol.

The Label Distribution Protocol (LDP) is supported solely for IPv4 cores, which means that to transport IPv6 packets we need to either use LDPv6, which does not exist, or somehow tunnel packets across an IPv4-only core. We need a mechanism to extend IPv4 BGP to carry IPv6 prefixes. A feature known as 6PE solves this and allows IPv6 packets to be labelled and sent over a BGP IPv4 TCP connection. The BGP session is built over TCP over IPv4, but with the “send-label” command BGP assigns a BGP label (not an LDP label) to IPv6 prefixes – enabling IPv6 transport over an IPv4 BGP session. In this case, both BGP and LDP are used to assign labels: BGP assigns a label to the IPv6 prefix and LDP assigns a label for remote PE reachability. This enables IPv6 services over an IPv4 core. BGP is simply a TCP application carrying multiple types of traffic.

Conclusion

This post introduces the key concepts involved in creating a Service Provider MPLS network. Interested in building your MPLS network – read Part 2 of this article for step by step instructions on how to create a 14 node MPLS network using Ravello’s Network Smart Lab. You can also play with this fully functional MPLS network by adding this blueprint to your Ravello account.

The post How to build a large scale BGP MPLS Service Provider Network and model on AWS – Part 1 appeared first on The Ravello Blog.

How to build a large scale BGP MPLS Service Provider Network and model on AWS – Part 2


mpls

Author:
Matt Conran
Matt Conran is a Network Architect based out of Ireland and a prolific blogger at Network Insight. In his spare time he writes on topics ranging from SDN, OpenFlow, NFV, OpenStack, Cloud, Automation and Programming.

Service Providers around the globe deploy MPLS networks using label-switched paths to improve quality of service (QoS) that meet specific SLAs that enterprises demand for traffic. This is a two-part post – Part 1 introduces MPLS constructs and Part 2 (this post) describes how to setup a fully-functional MPLS network using Ravello’s Network Smart Lab. Interested in playing with this 14 node Service Provider MPLS deployment – just add this blueprint from Ravello Repo.

The blueprint uses Multiprotocol Extensions for BGP (MP-BGP) to pass customer routing information with both Border Gateway Protocol (BGP) route reflection and full mesh designs. The MPLS core is implemented with OSPF as the IGP and Label Distribution Protocol (LDP) to distribute labels. To transport IPv6 packets over an MPLS IPv4 core, an additional mechanism known as 6PE is designed on a standalone 6PE route reflector. Labels are assigned to transport IPv4 and IPv6 packets across an IPv4 MPLS core.

Creating MPLS Service Provider Network on Ravello

I decided to use the Cisco CSR1000v to create my MPLS Service Provider network on Ravello, as it supports a strong feature set including MP-BGP, LDP, 6PE and route reflection. The CSR1000v is a fully featured Layer 3 router and fulfilled all device roles (P, PE, RR and Jump) for the MPLS/VPN network.

I created a mini MPLS/VPN network consisting of 2 x P nodes, 8 x PEs, 2 x IPv4 route reflectors and 1 x 6PE route reflector. A JUMP host was used for device reachability. The P nodes provide the core functionality and switch packets based on labels. The PEs accept customer prefixes and peer with either a route reflector or other PE nodes. The route reflectors negate the need for a full mesh in the lower half of the network.

Once the initial design was in place, I was able to drag and drop Cisco CSR1000v VMs to build a 15 node design with great ease. I also expanded the core (on the fly) to a 4 node square design and scaled back down to two nodes for simplicity. This type of elasticity is hard to replicate in the physical world. All devices are accessed from the management JUMP host. A local host file is created allowing you to telnet by name and not IP address, e.g. telnet p1, telnet PE2, etc.

The diagram below displays the physical interface interconnects per device. The core P1 and P2 are the hub for all connections. Every node connects to either P1 or P2.

Physical interface interconnects per device

The table below displays the management address for each node, and confirms the physical interconnects.

Device Name Connecting To Mgmt Address
PE1 P1 192.168.254.20
PE2 P2 192.168.254.21
PE3 P1 192.168.254.22
PE4 P2 192.168.254.23
PE5 P1 192.168.254.24
PE6 P1 192.168.254.25
PE7 P2 192.168.254.26
PE8 P2 192.168.254.27
RR1 P2 192.168.254.28
RR2 P2 192.168.254.29
RR3 P1 192.168.254.31
P1 P2,RR3,PE1,PE3 192.168.254.10
P2 P1,RR1,RR2,PE2,PE4 192.168.254.12
MGMT All Nodes. External

Logical Setup

In this lab, there are two types of BGP route propagation: a) full mesh and b) route reflection.

A full mesh design entails all BGP speakers peering (forming a neighbor relationship) with each other. If for some reason a PE node is left out of the peering, due to BGP rules and loop prevention mechanisms it will receive no routes. In a large BGP design, a full mesh creates a lot of BGP neighbor relationships and strains router resources. For demonstration purposes, PE1, PE2, PE3 and PE4 peer directly with each other, creating a BGP full mesh design.

For large BGP networks, designers employ BGP route reflection. In a route reflection design, BGP speakers do not need to peer with each other; instead they peer directly with a central control plane point known as a route reflector. Route reflection significantly reduces the number of BGP peering sessions per device. For demonstration purposes, PE5, PE6, PE7 and PE8 peer directly with RR1 and RR2 (the IPv4 route reflectors).

Route reflectors

In summary, there are two sections of the network. PE1 to PE4 are in the top section and participate in a BGP full mesh design. PE5 to PE8 are in the bottom section and participate in a BGP route reflection design. All PEs are connected to a Provider node, either P1 or P2. The PEs do not have any physical connectivity to each other, but they do have logical connectivity. The top and bottom PEs cannot communicate with each other and have separate VRFs for testing purposes. However, this can be changed by adding additional peerings with RR1 and RR2 or by participating in the BGP full mesh design.

The third BGP route reflector is called RR3 and serves as the 6PE device. Both PE1 and PE2 peer with the 6PE route reflector for IPv6 connectivity.

The Provider (P) nodes have interconnect addresses to ALL PE and RR nodes, assigned from 172.16.x.x. The P-to-P interconnects are addressed from 10.1.1.x. The IPv4 and IPv6 route reflectors are interconnected to the P nodes and assigned addresses from 172.16.x.x; they do not have any direct connections to the PE devices. The following screenshot shows how all the Service Provider MPLS network nodes are set up on Ravello.

image01

BGP and MPLS Configuration

PE1, PE2, PE3 and PE4 are configured in a BGP full mesh. Each node is a BGP peer of every other node. There are three stages to complete this design.

The first stage is to create the BGP neighbor, specify the BGP remote AS number and the source of the TCP session. Both BGP neighbors are in the same BGP AS, making the connection an IBGP session rather than an EBGP session. By default, BGP neighbor relationships are not dynamic and neighbors are explicitly specified on both ends. The remote-as command determines whether the relationship is IBGP or EBGP, and “update-source Loopback100” sources the BGP session from the loopback.

router bgp 100
bgp log-neighbor-changes
neighbor 10.10.10.x remote-as 100
neighbor 10.10.10.x update-source Loopback100

The second stage is to activate the neighbor under the IPv4 address family.

address-family ipv4
neighbor 10.10.10.x activate

The third stage is to activate the neighbor under the VPNv4 address family. We also need to make sure we are sending both standard and extended BGP communities.

address-family vpnv4
neighbor 10.10.10.x activate
neighbor 10.10.10.x send-community both

A test VRF named PE1 is created to test connectivity between PE1, PE2, PE3 and PE4. PE1 has a test IP address of 10.10.10.10, PE2 has 10.10.10.20, PE3 has 10.10.10.30 and PE4 has 10.10.10.40. These addresses are reachable through MP-BGP and are present on the top half PEs. The test interfaces are within the test VRF and not the global routing table.

interface Loopback10
ip vrf forwarding PE1
ip address 10.10.10.x 255.255.255.255

The diagram below displays the routing table for PE1 and the test results from pinging within the VRF. The VRF creates a routing table separate from the global table, so when pinging you need to make sure you execute the ping command within the VRF instance; the VRF-aware forms of the commands are shown after the screenshot.

image00
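
For reference, these are the VRF-aware forms used for that test; the addresses are taken from the test loopbacks above and the commands are run from PE1.

ping vrf PE1 10.10.10.20 source Loopback10
show ip route vrf PE1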

PE5, PE6, PE7 and PE8 are configured as route reflector clients of RR1 and RR2. Each of these PEs has a BGP session to both RR1 and RR2. RR1 and RR2 are BGP route reflectors configured with the same cluster ID for redundancy; to prevent loops, a cluster-id of 1.1.1.1 is implemented. They reflect routes from PE5, PE6, PE7 and PE8, not from PE1, PE2, PE3 and PE4.

The main configuration points for a route reflector design are on the actual route reflectors, RR1 and RR2. The configuration commands on the PEs stay the same; the only difference is that they have a single BGP peering to each route reflector rather than to every other PE.

Similar to the PE devices, the route reflector sources the TCP session from Loopback100 and specifies the remote AS number, which determines whether this is an IBGP or EBGP session. The cluster ID is used to prevent loops, as the bottom half PEs peer with two route reflectors.

router bgp 200
bgp cluster-id 1.1.1.1
bgp log-neighbor-changes
neighbor 10.10.10.x remote-as 200
neighbor 10.10.10.x update-source Loopback100

The PE neighbor is activated under the IPv4 address family

address-family ipv4
neighbor 10.10.10.x activate

Finally, the PE neighbor is activated under the VPNv4 address family. The main difference is that the route-reflector-client command is applied to the neighbor relationship for the PE nodes. This single command enables route reflection.

address-family vpnv4
neighbor 10.10.10.x activate
neighbor 10.10.10.x send-community both
neighbor 10.10.10.x route-reflector-client

The following displays the PE8 test loopback of 10.10.10.80 within the test VRF PE2. The cluster-id of 1.1.1.1 is present in the BGP table for that VRF.

image03
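
To inspect a reflected prefix from the CLI, a standard IOS-XE command such as the following can be used on one of the bottom-half PEs; for a route learned via the route reflectors, the output includes the originator ID and a cluster list containing 1.1.1.1.

show bgp vpnv4 unicast vrf PE2 10.10.10.80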

RR3 is a standalone IPv6 route reflector. It interconnects with P1 using IPv4 addressing, not IPv6. It does not serve the IPv4 address family and is used for IPv6 only. The send-label command labels IPv6 prefixes so they can be transported over an IPv4-only MPLS core. The command is configured on the PE side under the IPv6 address family.

The following snippet displays the additional configuration on PE1 for 6PE functionality. Note the send-label command.

address-family ipv6
redistribute connected
neighbor 10.10.10.11 activate
neighbor 10.10.10.11 send-community extended
neighbor 10.10.10.11 send-label

The IPv6 6PE RR has a similar configuration to the IPv4 RR, except that the IPv6 address family is used instead of the IPv4 address family. Note that the 6PE RR does not have any neighbors activated under the IPv4 address family.

The following snippet displays the configuration on RR3 (the IPv6 6PE RR) for the PE1 neighbor relationship.

router bgp 100
bgp log-neighbor-changes
no bgp default ipv4-unicast
neighbor 10.10.10.1 remote-as 100
neighbor 10.10.10.1 update-source Loopback100

address-family ipv4
exit-address-family

address-family ipv6
neighbor 10.10.10.1 activate
neighbor 10.10.10.1 route-reflector-client
neighbor 10.10.10.1 send-label

RR3 serves only PE1 and PE2 and implements a mechanism known as 6PE. PE1 and PE2 are chosen as they are physically connected to different P nodes. A trace from PE1 to PE2 displays the additional labels added for IPv6 end-to-end reachability. An extra label is assigned to the IPv6 prefix so it can be label switched across the IPv4 MPLS core. If we had configured 6VPE (VPNv6) we would see a three-label stack. However, in the current 6PE configuration (IPv6, not VPNv6) we have two labels: a transport label to reach the remote PE (assigned by LDP) and another label for the IPv6 prefix (assigned by BGP). These two labels, Label 18 and Label 41, are displayed in the diagram below as a two-label stack.

image02
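
This check can be reproduced from the CLI on PE1. IOS prints the MPLS label stack for each labelled hop of a traceroute, and show ipv6 cef displays the labels imposed on an IPv6 prefix. The destination prefix below is purely illustrative, as the blueprint’s IPv6 addressing is not listed here.

traceroute 2001:DB8::2
show ipv6 cef 2001:DB8::/64 detail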

The MPLS core consists of the P1 and P2 Provider nodes. These devices switch packets based on labels and run LDP to each other and to the PE routers. LDP is enabled simply with the mpls ip command under the connecting PE and P interfaces.

interface GigabitEthernetx
mpls ip
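
Once mpls ip is applied to the core-facing interfaces, LDP adjacencies come up automatically. Two standard IOS commands to confirm label distribution, shown here as a quick sketch:

show mpls interfaces
show mpls ldp neighbor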

OSPF Area 0 is used to pass internal routing information; there are no BGP or customer routes in the core, so the core nodes carry only internal reachability information. There are two ways to configure OSPF to advertise routes, and for demonstration purposes this blueprint uses both.

The snippets below display both ways to configure OSPF: enabling it directly under the interface, or configuring a network statement within the OSPF process.

interface GigabitEthernet
ip ospf 1 area 0

router ospf 1
network 0.0.0.0 255.255.255.255 area 0
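
Either method produces the same result. The standard IOS command below confirms which interfaces ended up in Area 0:

show ip ospf interface brief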

The image below displays the MPLS forwarding plane with the show mpls forwarding-table command. The table displays the incoming-to-outgoing label allocation for each prefix in the routing table. The outgoing action is either a POP label or an outgoing label assignment. For example, there is a POP label action for the PE1 and PE3 loopbacks, as these two nodes are directly connected to P1. However, for PE2 and PE4, which are connected to the other P node, there are outgoing label actions of 18 and 20.

As discussed, OSPF is the IGP and we are running Area 0. The command show ip ospf neighbor displays the OSPF neighbors for P1. It should be adjacent to all the directly connected PEs, P2 and RR3.

image04

The complete configuration for this setup can be accessed at this GitHub account.

Conclusion

This post walks through step-by-step instructions on how to create a 14-node MPLS network using Ravello’s Network Smart Lab. Interested in playing with this MPLS core network? Just open a Ravello account and add this fully functional MPLS network blueprint to your library.

The post How to build a large scale BGP MPLS Service Provider Network and model on AWS – Part 2 appeared first on The Ravello Blog.

Install and run VMware NSX 6.2 for Sales demo, POC and training labs on AWS and Google Cloud


vmware-nsx

In this blog post, we’ll discuss the installation of NSX 6.2 for VMware vSphere on AWS or Google Cloud through the use of Ravello.

NSX allows you to virtualize your networking infrastructure, moving the logic of your routing, switching and firewalling from the hardware infrastructure to the hypervisor. Software-defined networking is an essential component of the software-defined datacenter and is arguably the most revolutionary change since the creation of VLANs.

The biggest problem with installing NSX on a traditional platform is that it can be quite resource-intensive, requires physical network components, and involves a time-consuming initial setup. By provisioning NSX on Ravello, we can install once and redeploy anytime, greatly reducing the time required to deploy new testing, demo or PoC environments.

To set up your vSphere lab on AWS with Ravello, create your account here.

Setup Instructions

To set up this lab, first we start off with the following:

  • 1 vCenter 6.0U1 windows server
  • 3 clusters consisting of 2 ESXi hosts each
  • 1 NFS server

In addition to this, we’ll have to deploy the NSX Manager. This can either be deployed as a nested virtual machine or directly on Ravello. In this example, we deployed the NSX Manager as a Ravello VM by extracting the OVF from the OVA file and importing it as a virtual machine.

image03

Of the three vSphere clusters, two will be used for compute workloads and one will be used as a collapsed management and edge cluster. While this is not strictly needed, this setup allows us to test stretching NSX logical switches and distributed logical routers across Layer 3 segments. For the installation of ESXi you can refer to how to setup ESXi hosts on AWS and Google Cloud with Ravello. In addition, your vSphere clusters should be configured with a distributed switch, since the standard vSwitch doesn’t have the features required for NSX.

Each host in the compute cluster has the following specs:

  • 2 vCPU
  • 8 GB memory
  • 3 NICs (1 management, 1 NFS, 1 VTEP, each on a separate dvSwitch)
  • 1 20GB disk for the OS installation

The hosts in the management cluster have the following specs:

  • 4 vCPU
  • 20 GB memory
  • 4 NICs (1 management, 1 NFS, 1 VTEP, 1 transit, each on a separate dvSwitch)
  • 1 20GB disk for the OS installation

The management cluster is larger because it hosts our NSX controllers, edge services gateways and management virtual machines.

After publishing our labs and installing the base vSphere setup (or provisioning virtual machines from blueprints; I have blueprints for a preinstalled ESXi and vCenter, which saves quite some time), we can get started on the configuration of NSX.

The installation of the NSX Manager is actually quite simple. After deploying the virtual appliance, it will not be reachable through the web interface yet, because no IP address has been set. To resolve this, we can log in to the console with the username admin and the password default. After logging into the console, we run the command enable, which asks for the enable password (also set to default), and then run setup. This sets the initial configuration, allowing you to access the system through the web interface.
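
For reference, the initial console session looks roughly like the following. This is only a sketch; the hostname prompt and the exact wizard questions vary between NSX versions.

nsxmanager login: admin
Password: default
nsxmanager> enable
Password: default
nsxmanager# setup

The setup wizard then asks for basic network settings such as the IP address, netmask, default gateway and DNS servers; once applied, the web interface becomes reachable on that address.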

After configuring the manager, open a web browser and connect to https://ip-of-your-manager. After logging in, you should see the initial configuration screen:

image08

Start off with “Manage appliance settings” and confirm that all settings are correct. Of special importance is the NTP server, which is critical to the functionality of NSX and should be the same on vCenter, ESXi and the NSX Manager.

After configuring the appliance, we can start with the vCenter registration. Either open “Manage vCenter registration” from the main screen, or from the configuration page under Components -> NSX Manager service. Start with the lookup service, which should point to your vCenter server. If you are running vCenter 6 or higher, use 443 for the port, otherwise use 7444. For the credentials, use an administrator account on your vCenter server.

In the vCenter server configuration, point it to the same vCenter as used for the inventory service.

image06

In case the registration doesn’t work, wait a few minutes. The initial boot of the services can take up to 10 minutes, so the services might not have started yet. You can check this by opening “view summary” on the main page.

If the status doesn’t say connected after registration, click on the circular icon to the right of the status. The synchronization works automatically, but we can speed up the initial synchronization by forcing it manually.

After the initial setup, log out of the vSphere web client and log in again. You should see a new icon called “networking and security”.

image00

This gives you an environment preconfigured for NSX but without the controllers or NSX drivers actually installed in the hypervisors. It allows you to quickly provision a study or lab environment so people can configure NSX themselves without having to spend time deploying appliances or recreating ESXi hosts and vCenter servers. We’ll handle the preparation of the clusters in the next section, so if you want to create a fully functional NSX environment and blueprint, read on.

Cluster Preparation

First, we’ll deploy a controller. Go to “Networking and security”, open “Installation” and select the “Management” tab. At the bottom, you should see a plus icon which will deploy a controller.

image07

Select the datacenter to deploy in, select your cluster and datastore, and optionally a specific host and folder. Connect your controller to the same network as your vCenter server and NSX Manager and select an IP pool. Since we haven’t created an IP pool yet, we can do that now. Click on the green plus icon above the IP pool list and enter your network configuration. This IP pool will automatically provision static IP addresses to your controllers.

image05

In a production environment, you should run a minimum of 3 controllers (and always an odd number), but since this is a lab environment 1 controller will suffice. If you would like, you could deploy 3 controllers by repeating these steps and reusing the IP pool created earlier.

After deploying a controller, move to the “Host Preparation” tab. Click the “Install” link next to your cluster, and after a few minutes the status should show “Installed”. Repeat this step for every cluster you want to configure. After the NSX drivers have been installed on your cluster hosts, click the “Configure” link in the VXLAN column for each cluster. Select the distributed vSwitch you’ve provisioned for your VTEP network and an IP pool. Since we haven’t created an IP pool for the VTEPs yet, we’ll create one by selecting “New IP Pool”. Create this IP pool in the same way as we previously did for the controller network. Leave the rest of the settings at their defaults.

image01

After a few minutes, your VTEP interfaces should have been created, which you can also see in the networking configuration of the ESXi host. A new VMkernel port has been created with an IP address from the IP pool. The TCP/IP stack will also be set to “vxlan” as opposed to the default.

image02
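
If you prefer to verify this from the ESXi shell rather than the web client, the following standard ESXi commands (not specific to this lab) list the VMkernel interfaces and the available TCP/IP stacks:

esxcfg-vmknic -l
esxcli network ip netstack list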

After configuring VXLAN on each cluster, we can move on to the logical network preparation. Open the “Logical Network Preparation” tab and edit the segment ID & multicast address allocation. The segment ID configures the range of VXLAN network IDs (also known as VNIs) that NSX is allowed to use. This is mainly of importance if you run multiple VXLAN implementations in the same physical underlay. While this is unlikely in a Ravello lab environment, we’re still required to configure it.

image04

The multicast addresses are mainly used when NSX is set to use multicast or hybrid replication mode, so it is not required to configure them here.

The last step required is to configure at least one transport zone. Open the “Transport zones” tab and click the Plus icon to create a new one. Enter a name, select “Unicast” for replication mode and select the clusters that will be part of the transport zone. If you wish to stretch logical networks or distributed logical routers across clusters, select all clusters in your datacenter for this transport zone. If you wish to restrict logical networks or distributed logical routers to specific clusters (for example, your edge network) select only the clusters that should have access to these networks.

After creating a transport zone, you should have a fully functional NSX environment and you can start creating logical switches, distributed routers, edges, distributed firewalls and use any feature available to you in NSX.

Saving your environment as a Blueprint

Once you have installed your NSX environment, save it as a blueprint. Then you can share it with team members in your sales engineering organization, training group, and with your customers, prospects and partners. With a few clicks they can provision a fully functional instance of this environment on AWS or Google Cloud for their own use. You don’t need to schedule time on your sales demo infrastructure in advance; you can customize your demo scenario from a base blueprint, provision as many student training labs as you need on demand, and pay per use.

The post Install and run VMware NSX 6.2 for Sales demo, POC and training labs on AWS and Google Cloud appeared first on The Ravello Blog.

How to Run Ixia BreakingPoint in AWS for Testing, Demos, and Training


ixia

Author:
George Zecheru
George Zecheru is a Senior Product Manager at Ixia responsible for the Applications & Security portfolio. A patent owner, George has over 13 years of experience in the telecommunications industry.

Once upon a time all you needed to protect your network was a simple firewall. As Internet adoption increased, the protection provided by firewalls was soon discovered to be inadequate against the increased sophistication of today’s threats. Security vendors have responded with improved protection mechanisms, pushing inspection all the way up to the “content” (application) layer. Today’s NGN Firewalls are equipped with the intelligence to detect and prevent intrusion attempts and to identify malicious files, applications, users and devices.

Ixia’s BreakingPoint is the industry’s leading application and security test solution, used to validate the stability, performance and security of new-generation content-aware devices including NGN Firewalls, Web Application Firewalls, IDS/IPS, DLP, lawful intercept systems, URL Filtering, Anti-Spam, anti-DDoS, Application Delivery Controllers and WAN accelerators.

The BreakingPoint solution recreates every aspect of a realistic network, including scale and content. Ixia’s Global Application and Threat Intelligence (ATI) program fuels BreakingPoint with the intelligence required to simulate realistic traffic conditions and relevant attacks. All this intelligence is consolidated into a large database of applications and attacks (exploits, malware, botnets and DoS/DDoS).

Ravello’s networking overlay makes it possible to create full-featured network & security labs on the public cloud. With clean Layer 2 networking access, enterprises, ISVs and their resellers have adopted Ravello for a variety of use cases – network modeling, development-testing, training, sales demos, PoCs, cyber ranges, and security sandbox environments, to name a few.

This blog covers the configuration steps required to set up BreakingPoint VE on Ravello’s software-defined overlay and complement your existing network security labs, allowing you to recreate realistic traffic and attack scenarios.

Using Ixia’s BreakingPoint VE (Virtual Edition) on Ravello you can:

  1. Conduct enticing demos by recreating every aspect of a realistic network
  2. Understand your network better and how it works
  3. Validate your network security architecture
  4. Train your customers and strengthen the skills of your security professionals
  5. Improve your operational readiness for responding to security attacks

Environment Setup

  1. Deploy BreakingPoint VE on your local VMWare ESXi setup
  2. Use Ravello’s Import Tool to upload your VMs directly from VMware ESXi setup
  3. Verify and adjust the VM settings
  4. Publish your setup to AWS or Google cloud

1. Deploy BreakingPoint on your local hypervisor

BreakingPoint VE 3.5 and earlier versions rely on the hypervisor’s API to deploy the line cards. Consequently, before you deploy a BreakingPoint VE setup on Ravello you will need to deploy it first on a local hypervisor, either VMware ESXi or KVM.

The following document provides instructions to install BreakingPoint VE on your local hypervisor. You can download the Ixia OVA file (for VMware) and the installation guide from Ixia’s strikecenter portal.

BreakingPoint allows you to use a system controller with up to 12 line cards, and each line card can be configured with up to 8 traffic interfaces (test interfaces).

My example uses a setup consisting of a single line card with 2 traffic interfaces. If you need more line cards it is important to have your entire setup built before you upload the corresponding VMs to Ravello’s library.

Important:
In your local setup, BreakingPoint will use DHCP to acquire IP addresses for the management interfaces of the system controller and the line cards. Once you upload the VMs to Ravello’s library you must configure the management interfaces to match the IPs assigned to the BPS VE virtual machines in your local setup. This step must be done before you start your VMs; in the event of an IP mismatch, the controller will fail to discover the line cards. Assigning the IP address you want in Ravello is straightforward: just use “IP configuration = DHCP” and type the desired IP address in the “Reserve IP” field.

image00

2. Use Ravello VM Import Tool to upload your BreakingPoint VE VMs

Ravello VM Import Tool provides a simple method to upload your VMs to Ravello’s library by importing the images directly from your vCenter or vSphere setup. Here is a quick how-to reference for using the VM Import Tool.

image02

3. Verify and adjust VM settings

In this part you will need to configure the VMs to match the network configuration from your local setup and ensure each VM has the right CPU, RAM and NIC driver settings.

Settings Validation

  1. First verification step prompts you to verify the general settings (VM name, VM description, host name)

    VM Names:
    In my setup I used BPS-WebUI for the system controller and bpsLC for my line card VM

    VM Description:
    I added the “BreakingPoint Firmware Version”

    image01

  2. Second step prompts you to verify the System Settings

    Assign 4 vCPUs and 8 GB of RAM for each VM.

    image04

  3. Third step prompts you to verify the Disk

    There are no changes required but verify the settings are as shown below

    image03

  4. The fourth step prompts you to verify the Network settings

    The BreakingPoint system controller has two management interfaces:

    • eth0 – provides access to the Web User Interface and
    • ctrl0 – control interface for managing the communication with the virtual line cards

    The BreakingPoint line card has a single management interface (eth0) and allows a minimum of 2 traffic interfaces (test interfaces) and a maximum of 8.

    • eth0 – management interface used for communication with the system controller

    Verify that all NICs use VMXNet3 as the device type.

    As mentioned in step 1, it is important to configure each management interface with the same IPs that were assigned during installation on your local setup.

    Virtual Machine       Interface   IP Address        VLAN
    System Controller     ctrl0       192.168.109.199   1
    System Controller     eth0        192.168.109.200   1
    Line Card             eth0        192.168.109.202   1

image06

The line card includes at least 3 NICs – one for management and two for traffic. The first interface on your local VMWare setup (eth0) is the designated management interface. Please note that the import tool may reverse the order of NICs and it is important to assign the management address to the right interface. Assigning the management IP address to an incorrect NIC will break the communication with the system controller and make your line card undiscoverable. In my setup, the management interface was displayed as the second NIC.

Below is the configuration for each of the NICs associated with my BreakingPoint VE line card. The management interface has the IP address 192.168.109.202 reserved through DHCP and uses the same VLAN tag “1”. For the traffic interfaces I used VLAN 200 and disabled the DHCP service by using a static IP address.

image05

With the settings validated and adjusted per the above instructions, you can now create your application by adding the BreakingPoint System Controller VM and the BreakingPoint Line Card VM. To complete my setup I added a Windows VM to use as a local hop for accessing the BreakingPoint user interface. An overview of my network setup is captured in the following snapshot.

4. Publish your application to the cloud of your choice

image07

Conclusion

Ravello’s Network Smart Lab provides an easy way to use Ixia BreakingPoint Virtual Edition to test NGN Firewalls, Web Application Firewalls, IDS/IPS, DLP, lawful intercept systems, URL Filtering, Anti-Spam, anti-DDoS, Application Delivery Controllers and WAN accelerators without needing any hardware. Interested in trying it out? Just open a Ravello account and follow the instructions in this article.

About Ixia

Ixia provides application performance and security resilience solutions to validate, secure, and optimize businesses’ physical and virtual networks. Enterprises, service providers, network equipment manufacturers, and governments worldwide rely on Ixia’s solutions to deploy new technologies and achieve efficient, secure, ongoing operation of their networks. Ixia's powerful and versatile solutions, expert global support, and professional services equip organizations to exceed customer expectations and achieve better business outcomes. Learn more about Ixia’s story!

The post How to Run Ixia BreakingPoint in AWS for Testing, Demos, and Training appeared first on The Ravello Blog.
