Channel: The Ravello Blog

Introducing Repo: A community repository for infrastructure blueprints


We are very pleased to announce the availability of Ravello Repo. Repo is a library of public blueprints shared by experts in the infrastructure community (individuals and companies), for the benefit of the rest of the community. These blueprints represent multi-VM snapshots of entire application environments with networking and storage. Anybody can copy a blueprint from Repo into their private library and spin up an instance of the application environment in Ravello (on AWS or Google cloud) with one click.


Repo use-case #1: Learning labs for everyone

Repo currently has blueprints of OpenStack labs, Puppet and Ansible testing labs, Linux version migration testing labs, Arista Networks testing labs and more. Anyone who is interested in learning about these subjects can use their existing Ravello account (or open a new Ravello account), and within 5 minutes have a full scale lab to study, practice, customize and use. Say you are studying for your VMware VCDX exams. Or you are practicing for your Cisco CCIE certification. You can use these blueprints as a starting point for a live environment or build and share your own blueprints. The lab environment runs in the public cloud (on Ravello on AWS or Google cloud) and users do not need any local hardware. Users can also edit these lab blueprints and contribute them back to the community via Repo.

Repo use-case #2: Simulating Data center labs for research

Infrastructure experts often need custom data center labs for research. For example, say you need to see how OpenStack, Docker containers, Mesosphere and next-generation firewall technology from Check Point or Palo Alto Networks could work together. With Repo, you can start with a full-blown OpenStack environment and add and configure other components as required. Once everything is working well, document your findings and share them with the community.

Repo use-case #3: Labs for software or virtual appliance evaluations

Typically, ISVs have technical marketing and systems engineering teams build out reference architectures and implementation guides for their software. The intent is to help users set up the ISV’s software according to the ISV’s recommended best practices. For today’s users, that simply isn’t enough because:

  • they lack the servers and networking hardware to build out lab environments per ISV specifications.
  • they find that building the environments from scratch is difficult and error prone.
  • it’s often difficult to get online help for the user’s specific configuration and setup.

With Repo, ISVs or their users can share complete blueprints with the community. New or existing users can spin up entire lab environments without any hardware. This speeds up the evaluation process for users and the sales cycle for the ISVs. Users can also customize their environments and easily share them with the ISV or the broader community to assist in case of problems.

Repo doesn’t cost anything

There is no charge to copy a blueprint from Repo or to post a blueprint to Repo. Once a user copies a blueprint from Repo into their own Ravello account, they will be charged a nominal storage cost (typically a few cents per month) to store the blueprint in their account. They only pay for compute and volume storage when they run an application instance from the blueprint in their Ravello account.

We’d love to hear your feedback - once you’ve checked out the blueprints on Repo, let us know what you think and what else you’d like to see there - via Twitter (@ravellosystems) or in the comments section below.



Ravello’s Free Lab Service for all 2015 vExperts


Dear vExperts,

You are the pioneers of learning and sharing your knowledge to guide the entire VMware community. In order to support and accelerate innovation in the virtualization community, Ravello is excited to announce a free lab service for all 2015 vExperts. The estimated value of this cloud-based lab is $2500 per vExpert per year - and Ravello is picking up the tab in full for this party. In case you were wondering why it’s for current vExperts only, the reason is simple - we are a relatively young company with limited funding. But we invite VMware and the partners in its ecosystem to join us and contribute towards making free VMware labs available to more vExperts.

What is Ravello’s Free Lab Service for vExperts?

We hope our free lab service will provide you with additional resources, flexibility and agility to achieve your goals. Our mission is to enable you with smart labs to test new virtualization products and features, and collaborate and share complex designs, architectures and best practices - through which we hope that the entire VMware community can benefit. You can learn more and activate your free lab service here. In a nutshell:

  1. Starting today, each 2015 vExpert gets 1,000 free CPU hours per month on Ravello (with AWS/Google Cloud capacity included) for personal or home lab use. You can run full-fledged VMware labs with vCenter, multiple ESXi nodes and other VMware and partner products such as VSAN, NSX, Veeam, etc. in your Ravello lab.
  2. For the next one year, each month, your Ravello account will be topped up with exactly 1,000 free CPU hours in a “use it or lose it” model - the hours don’t roll over, so we encourage you to make the most of your free lab each month.

 

It’s been an exciting few months at Ravello

We’ve been keeping busy and having fun the last few months. Here’s what we have been up to:

  • Running VMware workloads on AWS/Google: Our nested hypervisor, HVX, is still the only way to run VMware workloads unmodified on AWS/Google Cloud (running as a vmdk with all its VM drivers, networking etc intact). We are helping lots of customers who have massive VMware environments but want to clone their dev/test, CI, sales demos and PoC environments in the public cloud for agility & cost reasons.
  • Running the VMware hypervisor itself - ESXi on AWS/Google: We literally took things to the next level by enabling hardware extensions such as Intel VT and AMD SVM in the cloud which allows third-party hypervisors such as ESXi and KVM to use public clouds like AWS and Google just like hardware. This is also an industry-first and currently the only way to run ESXi on AWS (join our beta). We got some fantastic feedback from the community and we humbly thank them for their support & guidance.
  • Giving back to the virtual community: We started with a pilot program for vExperts a few months ago and now we are officially launching Ravello’s free lab service for all 2015 vExperts, where the underlying cloud capacity from AWS/Google is fully funded by Ravello. I’m personally super excited about this and look forward to working with you and hearing from you. We also just launched Ravello Repo, a place to share infrastructure blueprints built by the community, for the community. Anybody can spin up their own copy of an application environment in their own Ravello lab with one click using these blueprints. Repo already has OpenStack, Arista and ESXi Autolab blueprints available for free download and we are eager to see what else you folks will come up with!

Let’s take virtualization to the next frontier.

Cheers,
Team Ravello

PS: On a side note, we’re looking for rockstar TMEs with strong technical backgrounds to join team Ravello - please email us at pm@ravellosystems.com


Splunk demo, PoC, training environments with L2 networking on Google & AWS


Splunk is a SIEM market leader with an active ecosystem of resellers, application developers, partners and customers – all of which need a Splunk lab for sales demos, customer PoCs, training and development testing. Ravello's Network & Security SmartLab presents an option to set up Splunk labs on the public cloud - Google & AWS - at costs starting at $0.14/hour.

Splunk – a SIEM market leader

Gartner has identified Splunk as a market leader in the Security Information & Event Management (SIEM) segment, which has seen phenomenal growth (16% Y/Y) in recent years. The key reasons for this growth are an increase in disparate machine data present in enterprises, and recent cyber attacks & data breaches. Enterprises are struggling to piece together machine data, logs and events from multiple sources to gain meaningful insights into the state of their systems, network and security. This is where Splunk shines, integrating with multiple third-party technologies. By analyzing everything from customer clickstreams and transactions to security events and network activity from a wide variety of nodes and network & security appliances, Splunk paints a holistic picture for IT Ops.


Everyone needs Splunk environments

Splunk has a large ecosystem of loyal customers, resellers, partners and application developers. Many network, security, and information system ISVs integrate with Splunk’s solution. Splunkbase – Splunk’s App Repository – reveals 762 specialized applications that cover a wide range of functions ranging from Application Management, IT Ops, Security & Compliance and Business Analytics to the Internet of Things. Each of these application developers/ISVs requires a fully functional Splunk environment comprising multiple appliances, LAN hosts and network nodes (log/event sources), complete with Splunk Enterprise & data collection machines, for their sales demo and development test environments. Splunk resellers and partners also need environments to deploy multi-tier, multi-node hosts to showcase the power of Splunk in a ‘real-world-like’ setting. Customers looking to purchase SIEM tools also need PoC environments where they can evaluate the capabilities of Splunk in a production-like environment before making a buying decision.

Where can I setup my Splunk lab for demos, PoCs, training?

ISVs, resellers and enterprises have explored provisioning their data centers to run these transient workloads – sales demo, PoC, training, upgrade and development test environments – and have experienced sticker shock: it is expensive! In addition, it takes weeks to months to procure and provision the hardware and get the environment running, and then there are opportunity costs when the environment is not being used.

Public clouds such as AWS and Google are ideal for such transient workloads – providing the flexibility to move to usage-based pricing and avoid these opportunity costs. Splunk has an AMI (Amazon Machine Image) that allows it to run on AWS. And while it is an excellent choice for ‘cloud native’ deployments, it doesn’t lend itself very well to mocking up a production datacenter environment – a requirement for Splunk demos, customer PoCs, and application development & testing use-cases. AWS networking limitations (e.g. lack of support for Layer 2 networking, multicast and broadcast etc.) make it impossible to mirror data-center environments on the public cloud natively.

A nested virtualization platform with a software defined networking overlay – such as Ravello – brings together the financial benefits of moving to the cloud while avoiding these technological limitations. Running workloads on Ravello Network & Security SmartLab brings all the benefits of running in a datacenter – one can use the same VMware and KVM VMs with their networking interconnect. And, since Ravello is an overlay cloud running on top of AWS & Google, one also reaps the economic and elastic benefits of Tier 1 public clouds. In essence, using Ravello, Splunk and its ecosystem of application developers/ISVs and customers can run sales demos, PoCs and training environments in datacenter-like environments without investing in hardware resources.

Steps to create Splunk environment on Ravello

I used 3 VMs to create a representative Splunk environment on Ravello – the first a Windows 2012 server for Splunk Enterprise (the indexer), the second a Windows 2012 server for the Splunk forwarder used for data collection, and the third a VMware Data Collection Node (DCN).

Uploading the 3 VMs to my Ravello Library using Ravello Import Tool was simple.

Ravello VM uploader gave me multiple options - ranging from directly uploading my multi-VM environment from vSphere/vCenter to uploading OVFs or VMDKs or QCOW or ISOs individually. I chose to upload my Windows VMs and DCN as an OVF.
Verifying settings
1. Verification started by asking for a VM name for the Windows VMs.
2. Clicking ‘Next’, I validated the amount of resources (VCPUs and memory) that I wanted my VM to run on.
3. Clicking ‘Next’, I was taken to the Disk tab. It was already pre-populated with the right disk size and controller.
4. Next, I verified the network interface for the Windows 2012 server. I chose to give it a DHCP address. Ravello’s networking overlay provides a built-in DHCP server.
5. Clicking ‘Next’, I was taken to the Services tab. Ravello’s network overlay comes with a built-in firewall that fences the application running inside. Creating “Services” opens ports for external access. Here, I created “Services” on ports 3389 and 8000 to open ports for RDP and web access to the Splunk web interface.
6. I went through steps 1-5 for the other VMs.

 

Publishing the environment to AWS
1. With my application canvas complete, I clicked ‘Publish’ to run it on AWS. I was presented with a choice of AWS regions to publish it in, and I chose AWS Virginia. My environment took roughly 5 minutes to come alive.
2. Once my VMs were published, I installed Splunk Enterprise on the first Windows 2012 server and the Splunk Forwarder on the second. Once installed, I could log in to the Splunk interface to configure my data sources and get the Splunk forwarder to send data to the Splunk indexer (see the sketch after these steps).
3. Once Splunk had finished indexing, I was able to see dashboards and execute searches.
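For reference, here is a minimal sketch of the forwarder-to-indexer wiring using the Splunk CLI, in case you prefer the command line over the web interface. The install paths, credentials and the indexer IP (10.0.0.5) below are placeholder assumptions – adjust them to your own environment.

:: On the indexer (first Windows 2012 VM): enable receiving on TCP port 9997
"C:\Program Files\Splunk\bin\splunk.exe" enable listen 9997 -auth admin:changeme

:: On the forwarder (second Windows 2012 VM): point it at the indexer and restart
"C:\Program Files\SplunkUniversalForwarder\bin\splunk.exe" add forward-server 10.0.0.5:9997 -auth admin:changeme
"C:\Program Files\SplunkUniversalForwarder\bin\splunk.exe" restart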

Conclusion

Ravello’s Network & Security SmartLab offers a unique and simple way to create datacenter-representative Splunk environments (without hardware investments) on AWS & Google. Just sign up for a free Ravello trial, and drop us a line. Since we have gone through the setup recently, we will be glad to help you create your own Splunk lab on Ravello.


OpenStack Revisited – Kilo Ravello blueprint to build lab environments on AWS and Google Cloud


Author:
Michael J. Clarkson Jr.
President at Flakjacket Inc., Michael is a Red Hat Certified Architect Level II (E, I, X, VA, SA-OSP, DS, A) and a Cloudera Certified Administrator for Apache Hadoop.

OpenStack Kilo Blueprint

Now that Kilo has had a bit of soak time, and with the next release of Red Hat OpenStack Platform to be based on it, I thought it was time to revisit OpenStack. Using the same methods as the Juno installation from my previous blog entry, I set up Kilo running on CentOS 7 using the RDO Packstack-based release. The blueprint is now available on Ravello Repo, ready for you to kick the tires. The answers file lives in /root/answers.txt on the controller node. Copy the blueprint to your account and go nuts. The VMs have cloud-init, so you will need your SSH keypair. The default user for SSH with the keypair is centos. The password for the root user and the OpenStack users admin and demo is ravellosystems. Once the instance is deployed, the Horizon UI is available at https://PUBLIC.IP.OF.CONTROLLER from any modern browser. Just accept the self-signed certificate at the warning screen.
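For a quick first session, the flow looks roughly like the commands below. The keypair filename is a placeholder assumption, and the keystonerc files are where a Packstack install normally drops its credentials – double-check the paths on your controller.

# SSH to the controller as the default cloud-init user
ssh -i ~/.ssh/my-keypair.pem centos@PUBLIC.IP.OF.CONTROLLER

# become root (password: ravellosystems) and load the admin credentials
su -
source /root/keystonerc_admin

# sanity checks: registered services and running instances
openstack service list
nova list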

Get it on Repo
REPO by Ravello Systems is a library of public blueprints shared by experts in the infrastructure community.

What’s new in Kilo?

There are a lot of improvements and bugfixes in existing services as well as some new projects beginning to see tech preview adoption. Some of the high points are:

  • Nova performance optimization takes better advantage of the hypervisor’s tuning options.
  • Better support for hypervisors including VMware, Hyper-V, and of course KVM.
  • NFV features including CPU pinning for VMs, large page support, and NUMA scheduling.
  • Flavor support for Ironic.
  • Support for Federated authentication via Web Single-Sign-On.
  • Trove DBaaS resizing support.
  • Tighter Ceilometer integration.
  • Improved VXLAN and GRE support in Neutron.
  • Subnet allocation in Neutron for better control of which projects are on which subnet/vlan.
  • Tons of new Neutron ML2 plugins.
  • Tighter integration with Ceph and Gluster for backending Cinder and Glance.
  • Better support for Ceph as a full replacement for Swift.
  • Improved Heat functionality including nested stacks.
  • Project Sahara now has full support for Hadoop CDH, Spark, Storm, MapR, HBase, and Zookeeper.
  • Many, many more.

Here are the release notes.

Ravello Repo

What is this Ravello Repo I mentioned earlier? As part of the rollout of our free service for current North American RHCEs, we created a centralized repository on which Ravello users can share the amazing blueprints they create with others in the community. The success of the open source model is proof that sharing is caring, and Repo continues in that spirit. Repo is available to any Ravello user and sharing is encouraged. All of the blueprints I’ve referenced in previous blog entries are there, along with labs for vSphere, Arista Networks, and Mirantis OpenStack. Come join the party!

Free Service for North American RHCEs

Do you have a current RHCE? Are you from somewhere in North America? We have a free service for you to reward your hard work. Free for personal use, you get 1,000 vCPU hours every month. This is a use-it-or-lose-it service, but you can use the hours as you see fit (as long as you don’t violate our Terms of Service). Sign up here.


Skyhigh Networks – On-demand customer PoC environments on AWS & Google Cloud


Author:
Dr. Nate Brady
Systems Engineering Manager at Skyhigh Networks
Nate is a Systems Engineering Manager at Skyhigh Networks, managing a growing team of SEs based out of the US. He is an expert in the networks, security and risk management domains.

Skyhigh Networks

Skyhigh Networks is the leading Cloud Access Security Broker (CASB) facilitating secure adoption of cloud-based services. Our solution is twofold: Skyhigh for Shadow IT enables enterprises to detect over 13,000 cloud services that employees may be using, complementing usage statistics with a 50-point risk assessment for each service as well as a full workup on usage, anomalous usage, and integration with existing perimeter devices to block access or warn users of impending danger. Skyhigh for Sanctioned IT facilitates cloud adoption by extending traditional controls, such as sharing policies, audit logging, and data loss prevention (DLP), to common cloud services such as Salesforce.com, Box, Office 365 and many others. Our frictionless deployment model has helped us gain a strong customer base comprising over 300 enterprises, many of which appear on the Fortune 500 list.

Ravello to the rescue: SE Training and Customer Mock-Ups

As an SE manager at a company growing as fast as Skyhigh, I needed a way to let new systems engineers immerse themselves both in Skyhigh’s own technologies and in those commonly deployed at our customers. For example, Skyhigh for Shadow IT integrates with common proxy and firewall technologies to add intelligence about cloud service risk to existing policies. Additionally, Skyhigh for Sanctioned IT allows customers to extend the reach of common DLP systems, such as Symantec and McAfee, to the cloud. While these integrations make life easy for our customers, it means that our SEs need to have a working knowledge of many different technologies. This is where Ravello has been our savior.

Using Ravello, we can build complex environments either from templates or entirely from scratch. This means that new SEs can quickly come up to speed by deploying use-case-specific training templates, and seasoned SEs can create mock-ups of customer environments in almost no time.

With Ravello, we are able to provide new SEs with a “lab in the cloud” on their very first day. All they need is a web browser and an Internet connection and they can build anything from a simple proxy server to a complex security infrastructure including firewalls, proxies, DLP and SIEM as well as server and desktop infrastructures.

In some cases, customers ask to see our product deployed in very specific circumstances but struggle to provide a dedicated lab environment for us to conduct a proof of concept. Ravello allows us to create these environments quickly and securely on behalf of our customers, saving them time and resources while also helping to shorten the sales cycle significantly.

While the environments vary from one enterprise to another, typical customer deployments feature at least a firewall, proxy, Active Directory, DLP, and some Windows servers and workstations – adding up to 14 or more CPUs and 32GB of RAM. Then it all has to be connected in the typical route/switch environment that is familiar in the datacenter but foreign to IaaS providers.

Skyhigh Environment

With Ravello, our engineers can have their own lab in the cloud, which allows them to learn new technologies and mock up customer use cases without having to contend for resources - in a cost-effective manner. This is phenomenal!

Dr. Nate Brady
Manager of Systems Engineering, Skyhigh Networks

Once again, Ravello to the rescue! Not only does Ravello make the creation of these environments a snap, it’s far more cost effective. By running an extra layer of virtualization, Ravello can fit several small virtual machines onto a single large instance.

Skyhigh Networks’ Requirements

To provide a great toolset to our engineering community as well as make life easier for our customers, we had some very special infrastructure needs:

  • No CapEx investments – We experience variable workloads depending on the SE and customer. As a SaaS company, we wanted to avoid maintaining a fixed-capacity datacenter and cost-effectively leverage the utility of the cloud.
  • Scale on-demand – To accommodate spikes in demand, we wanted a platform that could scale without impacting the customer’s PoC experience.
  • Zero change deployment – A large number of our technology partners provide VMware appliances. It was extremely important to be able to deploy these systems in the public cloud without any changes.
  • Infrastructure templatization - To facilitate quick development of training and mock-up environments, we needed a way to templatize common environments to be used as a starting point for customization
  • Ease of collaboration - We wanted a platform that made it easy to collaborate with prospects so that they could verify that the mock-up was configured to their specifications.
  • Usage-based costs – To reduce the overall cost of sales, the Skyhigh team was looking for a strictly usage-based pricing model.

We considered several options that partially satisfied some of these requirements. We looked into developing ‘PoC hardware kits’, but quickly moved away from this idea due to logistical challenges associated with shipping hardware to every SE in the company (and potentially some prospects, too!) Next, we evaluated using private cloud providers to host these environments but could not justify the high fixed costs involved. Public clouds were also considered, but the inability to run native virtual appliances, limited access to virtual machine consoles, and a lack of availability of L2 networking features made this unfavorable.

Ravello is really flexible - we can take any of our VMs and run them on the public cloud and access them just like we would in our own labs - down to the console access. These capabilities allow us to use our existing knowledge about virtualization platforms like VMware and apply it to cloud instances rather than relearning a new technology – drastically reducing the learning curve for our engineers.

Dr. Nate Brady
Manager of Systems Engineering, Skyhigh Networks

Ravello - a perfect match for Skyhigh’s needs

We tried Ravello for one of the PoC setups, and were very happy that Ravello delivered on all our needs.

Since Ravello runs on AWS & Google cloud, we were able to create PoC environments without investing in hardware or building our own datacenter. Ravello runs on Tier 1 clouds where there is no shortage of capacity and no quota or overage concerns – we were able to scale our PoC environments on demand, spinning up as many environments as needed. Further, Ravello’s HVX (high performance nested hypervisor) and networking overlay allowed us to run our existing VMware appliances and VMs without making any modifications – this really helped in reducing the learning curve.

Quick deployment of training and PoC environments is crucial to our sales cycle. Thanks to Ravello’s blueprint feature we were able to ‘templatize’ several environments and use them as a starting point for customization for PoCs – saving us days of work. With Ravello’s network overlay, we were able to recreate the same network topology as our prospect’s production environment, down to the very subnets and IP addressing. Further, with Ravello’s user-permissions feature we were able to selectively share applications with our prospects and collaborate on developing PoC environments quickly – speeding up the sales process. Finally, with Ravello’s usage-based pricing, we were paying only for the duration when our PoC & training environments were up – which significantly helped in containing costs.

Over time, we have standardized on Ravello for PoC environments for the vast majority of deployments.

Skyhigh Application on Ravello

Benefits realized with Ravello

I’d like to highlight some of the benefits realized through Skyhigh’s adoption of Ravello as the platform of choice for our customer PoCs:

Shortened sales cycle - With on-demand access to Ravello environments with rich networking capabilities, the Skyhigh team does not have to rely on the prospect’s environment to showcase its service in action. Eliminating this dependency has helped compress our sales cycle.

Reduced effort for lab environments - With re-usable topologies (blueprints) being used as a starting point for lab environments, we have been able to eliminate many of the repetitive tasks, improving efficiency in setting up new labs and customer mock-ups.

Effective sales engineer training & on-boarding - Encouraged by the success of using Ravello for training our SEs, we have begun using Ravello to create mock-up environments for customers who request them. Easy access to ‘disposable training labs without penalty’ has enabled our SEs to hone their skills, and has also helped in onboarding new SEs quickly.


My Virtual Way: Lab to migrate a VM with vMotion (and still without hardware)


The last time I started learning what VMware was all about, I stopped at the high-level theoretical overview of the availability, scalability, management and optimization challenges that VMware technologies help organizations overcome. Having no physical servers at my disposal, the first time I went through the long list of VMware technologies - vMotion, High Availability, vFlash and all the others - I didn’t actually do anything with them. This time, however, I used my ESXi lab set up on Ravello to try to get something done. The result: I migrated a VM using vMotion from one ESXi host to another.

I won’t bore you with my summary of the study guide I’m using to understand the differences between Fault Tolerance and High Availability or my (hopefully effective) ways to remember what DPM or SIOC stand for (if you’re studying for the VCA-DCV, drop me a line if you care to share notes). Instead I wanted to share what I needed to know to migrate a VM using vMotion and how I did it.

First - while I did learn what vMotion is supposed to do - I had no idea how it is actually done: what needs to be configured where, and how. I started with knowing that I would need a setup consisting of (at least) two ESXi hosts, so that I could migrate a VM from one to the other.

My ESXi lab

For this I used my basic lab, consisting of two ESXi nodes, vCenter, an NFS server and a Windows client running my vSphere client (I could have used the vSphere web client, but this was my basic setup, so I went with that).
ESXi lab diagram
I previously created and saved this whole ESXi lab as a blueprint in my Ravello library, so I didn’t have to upload, install or configure anything. I used the blueprint and published an application from it - basically running a nested ESXi lab on Google Cloud. A few minutes after I hit publish, I could console into my Windows client and run vSphere, where I had my two ESXi hosts already configured, as well as several VMs that I created there in the past. One of them - “ubuntu cloned VM” (not the best name, yes) - was already running.

Poetry in vMotion

Since I didn’t know anything about how to actually use vMotion, I searched for some resources and found a video from VMware that was fairly useful in pointing me to the “migrate” option on the VM. However, when I tried to do that, the vMotion option was greyed out, saying that the host the VM was running on wasn’t enabled for vMotion. With a little digging around and a little help from my friends, I realized that I needed to configure the switch on the hosts to enable vMotion.

As you can see from the following set of screenshots, once I enabled vMotion on the host, I was able to migrate my running VM using vMotion. I celebrated with this song.
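If you prefer the ESXi shell over the vSphere client, a rough command-line equivalent is sketched below. It assumes vmk0 is the VMkernel interface you want to carry vMotion traffic (adjust if you use a dedicated vmkernel port), and it needs to be run on each host.

# enable vMotion on the vmk0 VMkernel interface of this ESXi host
vim-cmd hostsvc/vmotion/vnic_set vmk0

# list the VMkernel NICs and their IP settings to confirm
esxcli network ip interface ipv4 get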

VM on ESXi host 1
Change host
New location
Migrating VM
VM running on ESXi host 2

Next time - a deep dive into vSphere core components. If you are also working on your VCA or VCP certification and have cool tips, useful guides and especially - if you have ideas for good hands-on exercises or are using Ravello for your VCA, VCP or VCDX study labs - let me know!


How to emulate DCs in public cloud and connect them using Cisco CSR 1000v?


Author:
Mirko Grabel
Mirko Grabel is a ‘Kick-Ass’ Technical Marketing Engineer with Cisco, and an active CCIE certified speaker at Cisco Live. He has global technical responsibility for ISR, CSR and ASR product lines.

Cisco’s Cloud Services Router (CSR) running IOS-XE is a very popular network appliance used in a variety of scenarios – as a VPN gateway, as an MPLS termination, to connect DCs, to provide an internet split-out for branches, to connect branches to HQ. Coupled with LISP, CSR can also be used to extend the DC with hybrid cloud infrastructure. There are numerous ‘how-to’ articles on the web that articulate how CSR can be used to connect cloud infrastructure to a DC or HQ, or to secure hub-to-spoke or spoke-to-spoke traffic with DMVPN and IPsec. To create topologies for these use-cases, however, one requires infrastructure to run the CSR routers and a networking interconnect to connect them.

In most organizations, it takes weeks to months to procure and deploy new hardware – servers, racks, switches – and it is expensive. CSR’s Amazon Machine Image (AMI) offers an alternative to try out some of the CSR’s features without having to invest in physical hardware. However, due to networking limitations on AWS (e.g. broadcast and multicast packets are heavily filtered, L2 is unavailable), many CSR features are unsupported without tunneling on the AMI (e.g. OSPF, IGMP, PIM, OTV, VxLAN, WCCPv2, MPLS, EoMPLS, VRF, VPLS, HSRP). Further, only one network interface can be configured with DHCP. This makes it difficult to create the full-featured CSR environments that I, as a CCIE, need to play with different features and mock up PoC environments.

This is where Ravello helps. Ravello is a SaaS solution powered by nested virtualization and a software defined networking overlay. Ravello enables networking professionals like me to create full-featured networking labs with a multitude of networking & security VMware or KVM appliances (including Cisco’s CSR 1000v!) on top of the public cloud (AWS or Google Cloud), and benefit from unlimited capacity and usage-based pricing. Ravello’s software defined networking overlay exposes a clean L2 network – just like a DC environment – and offers built-in network services such as DNS, DHCP, firewall, virtual switch and virtual router – should I need them. Further, it allows me to bring in my own router (CSR 1000v in this case) if I want specialized network functionality as a part of my environment.

Connecting DCs in the Cloud

A little skeptical of the tall claims made by Ravello, I decided to put Ravello’s Network Smart Lab to the test. (Data-center functionality – running VMware & KVM VMs with L2 access – at public cloud prices and flexibility just seemed too good to be true!) I embarked on creating a CSR deployment connecting two different data-centers through a VPN tunnel on Ravello. To emulate a DC, I added some LAMP servers and pointed them at the CSR as their gateway on Ravello. With a click, I made two copies of this setup and associated public IPs with the CSRs’ external network interfaces. Using Ravello’s ability to run VMs in multiple clouds, I published one of the copies of my setup to run on AWS and the other on Google. Once both environments were up and running, I configured each CSR instance in AWS and Google cloud to point to the other’s public IP, and voila – my two DCs running on AWS and Google were securely connected!

Cisco CSR with Ravello on AWS and Google Cloud

The rest of this article details configuration to get my multi-DC environment connected through CSR 1000v on Ravello.

Environment Setup

Getting this environment set up on Ravello involved 4 simple steps –

  • Uploading my CSR 1000v and LAMP servers to Ravello
  • Configuring networking on CSR VM
  • Publishing the environments on AWS and Google
  • Configuring the CSR

1. Uploading VMs

I used the Ravello Import Tool to upload all 3 of my VMs. Ravello’s VM uploader gave me multiple options - ranging from directly uploading my multi-VM environment from vSphere/vCenter to uploading OVFs or VMDKs or QCOW or ISOs individually. I uploaded my CSR 1000v as a QCOW image.

Ravello Import Tool

2. Configuring Networking

Upon uploading the CSR 1000v, Ravello asked me to confirm the resources (CPU, memory, storage) for my VM. Since Ravello had already identified the resources, it was more of a verification exercise.

Cisco CSR App System Tab
Cisco CSR App Disks Tab

Clicking on the Network tab, I added two network interfaces to the CSR – one each for the internal and external networks. I configured static IPs on the interfaces and chose “VirtIO” as the device type. I also associated an ‘Elastic IP’ with the external interface so that I could access it from anywhere.

Cisco CSR App Network External
Cisco CSR App Network Internal

To enable SSH access to my VMs, I opened port 22 in the “Services” tab.

Cisco CSR App Services Tab

Next, I created an ‘Application’ on Ravello. An ‘Application’ in Ravello’s terms is essentially a multi-VM environment. To create my DC, I dragged and dropped my CSR and LAMP VMs on the application canvas.

CSR App Canvas

Next, I saved my application as a ‘Blueprint’ – which is similar to taking a snapshot of the entire multi-VM setup. The blueprint enabled me to make additional copies of this ‘application’ environment.

3. Publishing environment on Google & AWS

With a blueprint of this environment in hand, I was able to create two application copies. I modified the IPs on the second copy so that they didn’t conflict with the first copy. Next, I published an instance of each to run in Google Cloud and AWS respectively. Publishing the application was a piece of cake – a one-click action.

Publish CSR App

4. Configuring CSR

With my DC environments set up in AWS and Google cloud, the next step was to configure the CSR to connect the two environments through a VPN tunnel. Here is how I configured the CSR for this environment using CLI.

Defined the hostname first

hostname CSR1000V-AMAZON

For convenience’s sake, in case I mistype something, I disabled domain lookup.

no ip domain lookup

To get SSH access, I needed a domain name, so I set one up.

ip domain name ravellosystems.com

Next, I generated an SSH key pair

crypto key generate rsa

Then I configured a username & password for the CSR

username admin privilege 15 password 0 XXXXXX

Next I allowed SSH on the incoming lines

line vty 0 4
login local
transport input ssh

To give out IPs to my client VMs, I also set up a local DHCP Server

ip dhcp excluded-address 10.0.2.0 10.0.2.9

ip dhcp pool DHCP
network 10.0.2.0 255.255.255.0
default-router 10.0.2.1
dns-server 8.8.8.8

And here comes the tricky part – how to get the IPSEC tunnel to fly. Here is the full config required for this. Details for each command can be found in the Cisco config guides.

crypto isakmp policy 1
encr aes
authentication pre-share
group 2
crypto isakmp key PASSWORD address 0.0.0.0        

crypto ipsec transform-set TRANSFORM_SET esp-aes 256 esp-md5-hmac
 mode tunnel
crypto ipsec profile 1
set transform-set TRANSFORM_SET 

interface Tunnel0
ip address 192.168.255.2 255.255.255.252
tunnel source GigabitEthernet1
tunnel mode ipsec ipv4
! The tunnel destination is the elastic IP of the other side!
tunnel destination 85.190.189.58
tunnel protection ipsec profile 1

Here is the simple IP interface configuration, nothing fancy...

interface GigabitEthernet1
description EXTERNAL
ip address 192.168.0.3 255.255.255.0
negotiation auto

interface GigabitEthernet2
description LAN
ip address 10.0.2.1 255.255.255.0
negotiation auto

My default route points to the internet (required to find the other Elastic IP) and my inter DC traffic (just traffic going to 10.0.1.0/24) points to the other side of the tunnel:

ip route 0.0.0.0 0.0.0.0 192.168.0.1
ip route 10.0.1.0 255.255.255.0 192.168.255.1

The tunnels are on-demand tunnels, so they only come up once traffic is present. Also, to see some counters increase, I created 2 SLAs – one that works just inside the tunnel and one that pings end-to-end from one LAN to the other.

ip sla 1
icmp-echo 10.0.1.1 source-ip 10.0.2.1
frequency 10
ip sla schedule 1 life forever start-time now
ip sla 2
icmp-echo 192.168.255.1 source-ip 192.168.255.2
frequency 10
ip sla schedule 2 life forever start-time now

To check whether my CSR sees the right interfaces, I have to type a tediously long command. So, I made an alias for it –

alias exec sps show platform software vnic-if interface-mapping

Upon doing a similar configuration on the CSR running in Google Cloud (and pointing it to its peer in Amazon), the two DCs were securely connected through a VPN tunnel.
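For orientation, here is a sketch of the mirrored pieces on the Google-side CSR, derived from the AWS-side configuration and routes above. The AWS elastic IP is a placeholder, and the isakmp/ipsec policy section is identical on both sides.

! Google-side counterpart (sketch)
interface Tunnel0
 ip address 192.168.255.1 255.255.255.252
 tunnel source GigabitEthernet1
 tunnel mode ipsec ipv4
 ! the tunnel destination is the elastic IP of the AWS-side CSR
 tunnel destination AWS.ELASTIC.IP.HERE
 tunnel protection ipsec profile 1

interface GigabitEthernet2
 description LAN
 ip address 10.0.1.1 255.255.255.0

! inter-DC traffic for the AWS-side LAN goes through the tunnel
ip route 10.0.2.0 255.255.255.0 192.168.255.2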

Verifying it works

To verify that this setup works, I typed in sh ip sla stat. An increase in the number of successes for the counters confirmed that my VPN was set up and active (hurrah!).

CSR IP SLA
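A few more standard IOS-XE show commands are handy for confirming the tunnel state on either CSR (listed here as hints, not required steps):

show ip sla statistics        ! per-SLA success/failure counters
show crypto isakmp sa         ! IKE phase 1 security associations
show crypto ipsec sa          ! IPsec SAs and encap/decap packet counters
show interface Tunnel0        ! tunnel line protocol and traffic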

Conclusion

Ravello’s Network Smart Lab offers a unique and simple way for networking professionals to create full-featured CSR environments for PoCs and network design (without hardware investments) on AWS & Google. Drop me a line if you would like to play with my CSR blueprint setup.


Migrating dev & test workloads from VMware to AWS


I was on the phone with Chris Porter today (side note - I was quickly impressed with his knowledge) and he made an interesting point about running dev/test workloads in the cloud. “If my production is on AWS, I’d certainly want to run my dev & test there”, he said. “But if my production is on premises on VMware, I’d like dev/test environments in the cloud that can be turned on and off on demand but the problem is they need to be very very similar to my on prem production deployment”. It sparked an interesting conversation on migrating from VMware to AWS, on cloud migration tools, cloud networking constraints, when to re-architect an application for a full-fledged production application migration, and how to approach the problem when it’s for dev/test only.

I’m a huge advocate of the no-migration approach to moving dev & test workloads to the cloud - in fact I may be guilty as charged for coining the phrase “Migration is for the birds, my VMs are nested” - but the fact remains that when you need to move your dev & test workloads to the cloud, you do need to consider the various scenarios and migration options out there. Ravello’s nested virtualization approach puts us in an entirely different category - we are a cloud provider that provides VMware-like or KVM-like environments in AWS/Google Cloud. As a result, we haven’t had too many conversations with customers about some of the migration tools out there such as Racemi, CloudVelox and Hotlink, but we do guide customers to use the AWS import utility to convert VMs, re-do their networking and re-think their application architecture if they are migrating their entire production application to AWS. But when they are looking at migrating dev & test workloads to the cloud, we (obviously) strongly recommend Ravello.

To summarize, here is a quick cheat sheet I came up with:

Migrating from VMware to AWS

If you run your production on premises - say on VMware, for example - then you have ample reason to migrate your dev & test to the cloud. The promise of just-in-time environments that can be created and destroyed on demand, coupled with never having capacity constraints (yes, no more QA environment bottlenecks as you get closer to release), is sufficient justification. However, the challenge has always been the issue of high-fidelity dev & test environments. How do you have confidence in your test results if the environment in the cloud looks very different from your on-premises production environment? This is why an increasing number of VMware customers are turning to other VMware-based clouds such as vCloud Air, or vCloud partner cloud providers, so that they can get similar environments on premises and in the cloud. Some would argue that it’s not the same as having the price, reliability and flexibility of some of the leading clouds in the world such as AWS & Google Cloud. And I would argue that Ravello has stretched the “sameness” frontier by recreating VMware- and KVM-like environments in AWS & Google using nested virtualization. The majority of Ravello’s customers, such as 888 Holdings, are running their production on either VMware or KVM in their private data center, and instead of converting their VMs or modifying the networking in their dev/test environments to run them on AWS, they simply “nest” them as-is on Ravello.

If your production and dev/test are already in the cloud, migration is a moot point, isn’t it? And if your production is on premises and dev/test is already in the cloud, you seem to be on the right track as long as you haven’t changed your dev & QA environments to “fit” them to the cloud of choice. And finally, if your production is in the cloud and for some strange reason your dev/test is on premises, you had better have a very good justification, because in terms of capacity utilization it’s not ideal to have your prod running 24x7 in the cloud while your dev & test workloads, which tend to be more bursty, are running in house.

In any case, I'm eager to hear your thoughts on running production on premises on VMware and your dev/test workloads on AWS. How did you approach the problem? And shameless plug... but you do get a 14-day free trial if you'd like to try Ravello for dev/test.



The future of training is here: Self paced learning with just-in-time lab environments


There is a funny analogy between TV and self-paced learning. How many of you are still watching only regular cable - without any DVR, without any on-demand content? Very, very few, I bet. And I’m sure most of us have wondered - is this the future of TV? Everything on-demand?

I feel the exact same way about software training courses. No doubt instructor-led and classroom courses have their advantages, but self-paced learning is quickly becoming a necessity for every software course today. Before I jump into the reasons, let’s look at this 2 min video by Hacker Academy. Big shoutout to them - they are one of the smartest bunches of people I have come across. In this video they show you how their self-paced security training automatically spins up live just-in-time lab environments for students across the globe. The best part is not just the zero-touch provisioning but also the fact that labs are fully configured for the associated module and then self-destruct after a set time. Wow. The future of security training indeed.

Hacker Academy walks through student & trainer flows for their online security labs

[video url="https://www.youtube.com/watch?v=O8r4W8zwqjg"]

Top 5 reasons self-paced learning with smart labs are all the rage

  1. Anytime, anywhere learning - duh, no brainer right? Everybody is busier than ever and users are increasingly happier with self-paced learning that they can consume on their own schedule than with having to block out their schedule.
  2. Hands-on learning - if I had to choose between a self-paced course without labs and an instructor-led course with a lab, I’d definitely choose the latter. I’ve always been a huge fan of learning by doing. But when you give me a lab to learn and play with on my own time - now that’s magic.
  3. Zero-touch provisioning of labs - having spoken to plenty of trainers out there, I know that provisioning labs for a training course is nothing short of an expedition each time. Even with lots of automation there are all sorts of variables due to the type of hardware available, the number of students attending etc. This is why having one API call to provision 100 isolated, pre-configured labs makes all the difference from a trainer’s perspective (see the sketch after this list).
  4. Just-in-time labs that self-destruct - in an ideal world, we’d all have infinite budgets and could leave different lab scenarios running all the time for any number of students. Well, reality bites, and we all come to appreciate that labs that are spun up on demand and self-destruct after 2 hours are easy on the budget, making for an affordable solution that can actually be rolled out in the real world.
  5. No scale limits - whether 1 student takes the course on a certain day or 100. With cloud-based smart labs there would be no impact whatsoever, because the infrastructure is rented on demand from AWS/Google and you only pay for capacity that’s used. So if your software just got mentioned by Oprah and everybody and their mother decided to learn about it on the same day - you’d be all set :)
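As a rough illustration of what “one API call per lab” can look like, here is a sketch against Ravello’s REST API from the command line. The endpoint paths, field names and blueprint ID below are assumptions made for illustration only – consult the Ravello API documentation for the exact contract.

# assumption: create an application for one student from a saved blueprint
curl -u user:password -H "Content-Type: application/json" \
  -d '{"name": "student-lab-042", "baseBlueprintId": 12345}' \
  https://cloud.ravellosystems.com/api/v1/applications

# assumption: publish (start) the newly created application on the public cloud
curl -u user:password -X POST \
  https://cloud.ravellosystems.com/api/v1/applications/APP_ID/publish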


How to setup DHCP for 2nd level guests running on ESXi in Ravello


As most of you probably know, besides implementing a hypervisor capable of running regular VMs, we’ve also implemented CPU virtualization extensions - VT for Intel and SVM for AMD CPUs. These extensions, in essence, allow running other hypervisors such as KVM or VMware’s ESXi on top of Ravello. In this blog I’m going to focus on using DHCP for the 2nd level guests running on ESXi. This post is optional - skip it if you plan to use only static IPs for your 2nd level guests.

Overview

This article describes how to use DHCP for 2nd level guests running on ESXi in Ravello. In Ravello, DHCP is not available for 2nd level guests by default. The reason is that the Ravello system is totally unaware of those 2nd level guests, and those guests cannot reach Ravello’s built-in DHCP server by broadcasting DHCP DISCOVER packets. DHCP requests from 2nd level guests will not be answered, and therefore the guest OS will not receive a DHCP address. In order to use DHCP in the 2nd level guests, you will need to:

  1. Define the networking in your Ravello Application and vSphere environment for supporting another DHCP server.
  2. Install and configure your own DHCP server VM as a 2nd level guest to service the other 2nd level VMs.

There are a few ways to do this. This blog describes the easiest way to do both.

Defining the networking in Ravello to support another DHCP server

There are two important factors that need to be kept in mind when defining the networking:

  1. The new DHCP server should respond only to the 2nd level guests.
  2. The 2nd level guests should get responses only from the new DHCP server.

Here is an example of how to do so:

  1. For each ESXi node in your Ravello Application, add (at least) one NIC dedicated to a separate network that will be used only by those 2nd level guests:
    1. Set a reserved DHCP IP or a static IP for this NIC on each ESXi host running guests. Do this for all the ESXi hosts running guests in such a way that all of these IPs are in the same network.
    2. In my example, I have 2 ESXi machines with 2 NICs each. The first NIC uses Ravello’s default network (10.0.x.x). The 2nd NIC uses a new network (20.0.x.x), because I have set the reserved DHCP IPs of the 2nd NICs to 20.0.0.3 and 20.0.0.4.
    3. You can look at the settings of the NICs here:
      NIC Settings
  2. For each ESXi, add another VM network for this NIC:
    vSphere Standard Switch
  3. Set the guest networking to use only this NIC:
    VM Properties

Installing your own DHCP server

You can use any DHCP server you prefer. In this blog I will describe how to install the ISC DHCP server on a vanilla Ubuntu machine. In addition, due to the network topology I selected in this example, the new DHCP server will be another VM in the Ravello application, connected only to the 20.0.x.x network.

  1. Deploy a new Ubuntu machine in your Ravello Application and give it a static IP/reserved DHCP IP in the 20.0.x.x network. In my example I have used reserved DHCP IP 20.0.100.100.
  2. Make sure your repositories are updated: sudo apt-get update
  3. Install the ISC DHCP server: sudo apt-get install isc-dhcp-server
  4. Enable packet forwarding: sudo vi /etc/sysctl.conf and remove the comment from net.ipv4.ip_forward=1
  5. Reboot the DHCP server machine to enable packet forwarding: sudo reboot
  6. Make your DHCP server act as a DNS server as well: sudo apt-get install bind9
  7. Edit the DHCP settings - sudo vi /etc/dhcp/dhcpd.conf and perform the following (note that in my example here the IP of the DHCP server is 20.0.100.100):
    1. subnet 20.0.0.0 netmask 255.255.0.0 {
      range 20.0.1.10 20.0.1.100; # you can set any range in the network as you prefer
      option routers 20.0.100.100;
      }
    2. option domain-name-servers 20.0.100.100;
  8. Restart the DHCP service: sudo /etc/init.d/isc-dhcp-server restart (a quick way to verify that leases are being handed out is sketched below)
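To sanity-check that your 2nd level guests are actually getting leases from the new server, a couple of quick checks on the Ubuntu DHCP VM can help (standard ISC dhcpd locations on Ubuntu; shown as a hint, not a required step):

# confirm the service is running
sudo service isc-dhcp-server status

# watch leases being handed out to the 2nd level guests
sudo tail -f /var/lib/dhcp/dhcpd.leases

# follow dhcpd messages (DHCPDISCOVER/DHCPOFFER) in syslog
sudo tail -f /var/log/syslog | grep dhcpd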

The overall networking of your Ravello application should look something like this:

Ravello Networking


My Virtual Way: VCA and VCP certification exam prep: Lab setup – Getting started


There are many great guides out there to study for the VCA and VCP exams, but many of us don’t have access to a proper lab setup to train on. Especially now with VCP6, it’s often tough to meet the vSphere 6 requirements in the home lab. I’m not as far along in the certification process just yet, but I already have my own ESXi lab. Instead of purchasing hardware or using a hosting provider, I set up my ESXi lab on AWS. I put together this quick outline hoping to help fellow exam takers.

If you’ve read my previous posts in this series, first of all thanks! Second of all, as many of you are going through the VCP certification process, you’re sending some great questions about running ESXi on AWS using Ravello. To answer some of these questions, I wanted to share some of the Ravello ESXi Smart Lab basics, so that you can have your lab setup up and running and be on your way to the VCP certification faster.

We recently held a webinar discussing how to build ESXi labs on AWS/ Google Cloud. Enjoy the webcast and slides...

[video url="https://www.youtube.com/watch?v=h9byjFw5omQ"]

[slideshare id=48986275&doc=20150602esxiwebinar-150604114506-lva1-app6891]

So, to the basics:

  1. You will be running ESXi in the public cloud but you will install, configure and use it exactly as if you were running on hardware servers in your home. For all intents and purposes, it’s like you’re running on infinite capacity (vCPU and GB RAM) if you need to (I’m pretty sure you don’t, though).
  2. You need to bring your own license to build your lab setup.
  3. Pricing is usage-based. That means that you will be paying for the hours of vCPU and GB RAM consumed. This is perfect for your lab, since you don’t need to buy hardware, and you only pay for what you use. You can see a detailed example of pricing for a home lab or go to our calculator to figure out the cost of your setup.
  4. You don’t need cloud credentials to set this up; your dealings are only with Ravello.

I think this almost covers the getting started part. Now just for the outline to get your ESXi lab into shape (the links include easy to follow step-by-step guides, including screenshots and all):

  1. How to install ESXi on AWS and Google
  2. How to set up vCenter on AWS and Google
  3. How to create an ESXi cluster on AWS and Google

If there are some other basics I haven’t covered - please feel free to add your comment to the post and we’ll keep the conversation going. Check it out, and don’t hesitate to comment here or in our support forum, or even send us a guest post describing your experience studying for the VCP exam using Ravello.

Good luck to all of us!


ManageIQ user day and lab at the OpenStack summit


Author:
Geert Jansen
The product owner of CloudForms, Geert’s areas of expertise include tinkering with all new technologies and developing extensions and modules to contribute on GitHub.

ManageIQ is an open source cloud management platform. It implements features like chargeback, governance, security policies, orchestration and self-service on top of various virtualization solutions, private clouds, and public clouds. ManageIQ is the open source project on which the commercial Red Hat CloudForms product is built.

I am very proud to announce that our first-ever ManageIQ user day will be held at the OpenStack summit in Vancouver. We are confirmed for Wednesday May 20th, from 1:50 until 6pm, in Room 1 of the East building.

The meeting room has been kindly provided by the OpenStack foundation, and our demo labs are provided by Ravello Systems. Kudos to both organizations for assisting us with this!

Please RSVP here: miq-oss-cday.eventbrite.com

Attendance is free, but since we are hosted at the OpenStack summit you do need a summit pass.

ManageIQ is very relevant to OpenStack as it adds enterprise management capabilities that customers need in order to deploy OpenStack at scale, but that are not provided by OpenStack itself.

Our agenda is as follows:

  • 1:50pm - 2:30pm: We will start off with a quick presentation and demonstration of ManageIQ
  • 2:40 - 5:10pm: Hands-on lab. You will deploy your first ManageIQ installation, connect it to an already installed OpenStack installation running in Ravello, and create some initial management capabilities such as security policy and a self-service catalog.

During the user day, our worldwide experts on ManageIQ and CloudForms will be in the room to help you with the labs and answer any questions.

If you are planning to roll out OpenStack in an enterprise, but have concerns around enterprise manageability, our user day should be a great way to get to know ManageIQ and see if it fits your needs.

The post ManageIQ user day and lab at the OpenStack summit appeared first on The Ravello Blog.

Mirantis OpenStack 6.0 Lab on AWS and Google Cloud


mirantis-logo

Author:
Stacy Véronneau
Lead OpenStack architect with CloudOps, Stacy has extensive experience in cloud and classic infrastructures and in operations management across multiple technologies and IT infrastructure disciplines.

Mirantis OpenStack (MOS) is a hardened OpenStack distribution that ships with the Fuel deployment orchestrator. Fuel uses PXE boot to set up the other nodes in the OpenStack cloud, making it very easy to quickly set up a multi-node OpenStack environment. Most traditional Mirantis OpenStack deployments are done on bare metal, where there is support for PXE, full access to Layer 2 networking, hardware acceleration support, and so on. However, that requires capex investment in physical hardware and longer lead times to get everything provisioned. I approached Ravello to leverage their technology to set up Mirantis OpenStack on the public cloud so I could overcome these challenges, and it has been great to partner with them.

With Ravello's nested hypervisor platform, I was able to set up an environment with 3 compute nodes (with Cinder), 1 controller and 1 Zabbix node (the monitoring system that is part of Mirantis OpenStack), deployed with Fuel on AWS and Google Cloud. The setup also comprised multiple logical networks - admin, management, fixed/private, and public/floating. Check out my detailed how-to blog post here: http://www.cloudops.com/2015/05/faking-bare-metal-in-the-cloud-with-ravello-systems. I saved my multi-node Mirantis OpenStack application as a blueprint in my private library on Ravello, so I can now spin up multiple isolated instances of the entire Mirantis OpenStack environment from the blueprint on AWS and Google Cloud whenever required - on demand. There is no lead time spent starting from scratch, configuring physical hardware and installing the various software components. These environments can run for as long as required and then be shut down, to be spun up again later when needed.

Get it on Repo
Repo, by Ravello Systems, is a library of public blueprints shared by experts in the infrastructure community.

Video: https://www.youtube.com/watch?v=8G6IWd2G9iY

Slides: available on SlideShare (openstackwebinarv2)

The post Mirantis OpenStack 6.0 Lab on AWS and Google Cloud appeared first on The Ravello Blog.

VirtualBox to Vagrant to Ravello Smart Labs – Software Development Infrastructure Evolution


Vagrant

VirtualBox and Vagrant are popular tools in the development & test community because of their ease of use and simplicity - but developers want more. This post discusses when to use Virtual Box, how Vagrant fits into the picture and how SmartLabs is the next step in that evolution.

VirtualBox

VirtualBox is popular as a means to create a standard development environment. Using VirtualBox one can package a virtual machine complete with OS, tools, compilers, environment settings etc. that developers can download onto their laptops and start building applications. Thanks to a standardized development environment, software developers don’t run into scenarios that merit saying “...not sure what’s wrong here, but it was working fine on my laptop when I wrote that code!”.

virtualbox

While VirtualBox is great for simple applications that can be deployed on a single machine, it falls short when the application is spread across multiple VMs. VirtualBox alone doesn’t have the capability to define the configuration and networking interconnect for a multi-virtual-machine environment. This limitation also makes it hard to standardize test environments with VirtualBox alone - most test environments need at least two machines: a device under test (DUT) and a test generator.

Vagrant - builds on VirtualBox

Vagrant complements VirtualBox in such multi-VM scenarios. Vagrant is a wrapper around virtualization technologies such as VirtualBox (and more recently VMware) that helps configure virtual development environments that are spread across multiple VMs. Vagrant has many features that make it a popular tool in the development community. It -

  • Is simple - to start a multi-VM environment, just type ‘vagrant up’
  • Automates setting up Layer 3 networking between VMs
  • Creates ‘disposable’ development / test environments on servers with ease
  • Centrally controls the configuration for all VMs
  • Enables source-control of environment configuration - Vagrant settings are in a text file
  • Integrates with Chef & Puppet for VM provisioning

Vagrant is great, but developers need more

I get to engage with developers and testers on a daily basis. The overwhelming feedback from the developer community is that Vagrant is a great tool and mitigates many developer pain points. However, many Vagrant aficionados also highlight some shortcomings. They say Vagrant has the potential to be a truly ‘stellar’ tool if it could also do the following -

  • Snapshot the entire environment along with VMs, their configuration and corresponding network configuration
  • Source control the entire environment (not only the configuration) - without having to use external tools
  • Create exact replicas of production environment, and use for development & testing. No more nasty surprises when code is deployed in production.
  • Spin up as many copies of this environment as they wanted - with just a click. Onboarding a new dev or test resource would become very easy.
  • Create ‘disposable copies’ of this environment on public cloud and pay only for what they use - contain costs
  • Attach a copy of ‘disposable’ production environment to each bug fix, and share the entire environment along with the fix with the testers - no more ‘the-fix-was-working-in-my-dev-environment’ conversations
  • Have access to a clean Layer 2 networking in addition to Layer 3 - build cool application features (e.g. auto-discovery & high-availability protocols) that rely on this

Ravello - a fresh technology perspective on enabling software development life cycle

Ravello’s Smart Labs platform - powered by nested virtualization and a software-defined networking overlay - does all this and much more. In fact, it has the potential to accelerate the entire software development lifecycle.

As an example, early in the product development lifecycle one is typically busy prototyping a concept and needs a flexible environment for the ‘skunkworks’ project. It is difficult to get resources for prototyping at most companies - provisioned resources are already spoken for by the approved projects. In this phase, Ravello’s Smart Labs can help get your prototype lab running quickly on the public cloud without investing in hardware resources - limiting the cost risks associated with the prototyping phase.

Once the concept prototype has matured into a planned project, Ravello’s Smart Labs can help the development and test teams stay in sync and collaborate productively. A Smart Lab serves as an excellent environment for design, development and QA - the development team can build incrementally and pass the developed software, complete with its corresponding environment, to the QA team for testing. Smart Labs make it possible to attach an instance of the lab environment to each bug or feature. Also, thanks to the elasticity of the public cloud, teams can easily accommodate the burstiness in demand for new dev/test environments closer to the release date.

Before release, one also needs to validate the functionality of the application in a variety of deployment scenarios. Ravello’s Smart Labs simplify creating deployment scenarios using ‘disposable labs’. If something goes wrong during upgrade or deployment testing, one can ‘destroy’ the botched lab and create a new one with a click. It is easy to resume from the last good state by using a snapshot ‘blueprint’ to spin up the new lab.

Once the software is released, the sales team needs environments for sales demos and customer PoCs. Using Smart Labs, it is easy to create repeatable demos in a cost-effective manner. Simply set up the demo on a Smart Lab, take a snapshot ‘blueprint’, and spin up a copy of the environment on AWS or Google every time a sales engineer needs to do a demo. As the opportunity matures and a PoC environment is needed, the same demo environment can easily be shared with the customer and customized for the planned deployment.

To train customers, partners and resellers on product functionality, copies of training environments can be spun up in AWS and Google Cloud at the locations closest to the trainees, providing a local user experience. Also, classroom training leads to burstiness in resource demand that can be easily accommodated using the elasticity of the public cloud.

Ravello’s Smart Lab also proves to be a great tool for collaborating with customers when troubleshooting issues. Using Ravello, customers can create high-fidelity copies of their production environment that can be used for debugging issues without having to provide access to the real production environment - mitigating the associated risks.

Ravello & Product Lifecycle

Conclusion

Ravello’s Smart Lab offers the benefits associated with VirtualBox & Vagrant - and more - by taking a fresh technology perspective that enriches every stage of the software development lifecycle. Interested in trying Ravello Smart Labs? Just sign up for a free Ravello trial, and/or drop us a line and we will help you get started.

The post VirtualBox to Vagrant to Ravello Smart Labs – Software Development Infrastructure Evolution appeared first on The Ravello Blog.

How to use Veeam Cloud Connect for DR: backup and restore with nested ESXi on AWS


veeam-cloud-connect

The topic of enterprise disaster recovery in the "Cloud" often comes up when I am working with Ravello customers. Since Ravello can create high-fidelity, on-demand copies of your VMware and KVM environments, with complex networking, in Amazon AWS and Google Cloud, people often ask me about DR testing scenarios. A couple of years ago I was using Veeam to back up my VMware ESXi clusters and recover virtual machines in a few different on-premise data centers. I decided to take what I learned then, apply it to some "Cloud"-based DR use cases utilizing Ravello's service, and write a detailed blog about it. Since Ravello can now run ESXi on the public cloud, I can use AWS or Google as my remote site. This could be a useful lab deployment for service providers and/or their customers who want to demo, POC or just try out Veeam Cloud Connect for themselves, using ESXi on a remote site (in this case the remote site happens to be AWS or Google Cloud).

I have some Veeam contacts, so I reached out to discuss a couple of different use-case ideas. I landed on one very interesting concept where I would utilize Veeam's "Cloud Connect" offering.
Today, lots of service providers support "Cloud Connect Gateways" for off-site backups. This is great, as it gives customers another copy of their very important virtual workloads. To restore a workload from those backups, customers would typically point to an ESXi (or Hyper-V, as the case may be) target either in their private cloud or in a hosted/public cloud running ESXi. We figured it would be an interesting exercise to use those off-site backups and restore the workload on AWS or Google Cloud using Ravello's nested ESXi offering.

Here are some options I see that exist today:
- The customer gets around to repairing their local environment (or has another data center), points to the repository at the service provider and starts the restore process. This is not ideal in my opinion, as you have to transfer all the required data across the WAN to conduct the restore. You will need a huge pipe for that, and it will take a considerable amount of time.

- A complete download of the hosted backups is not the only option; many service providers offer in-place restores from Cloud Connect backups. Customers can utilize the service provider's "shared" VMware environment to do the restore. Each service provider has its own pricing scheme, so please look them up, but they typically have a base subscription fee, may or may not have on-boarding fees, and then have usage-based charges. They also tend to provide additional value-added services and guidance to the customer in addition to providing the capacity.

- The customer could request an "always ready" virtual environment, running and provisioned for them for when they declare a disaster - essentially a secondary data center sized for their needs. This would come with a large cost, as it needs to be kept ready and maintained on a regular basis.

Now back to my use case:
- Utilize the endless on-demand capacity of a public cloud such as AWS or Google and pay only for the capacity that you use and require at the time. There are no on-boarding fees or monthly subscription fees - environments can be created and destroyed entirely on-demand. It seems like a cool proof of concept (note that this is a lab scenario and a technology demonstration at this point).

Since Ravello has recently released support for VMware's ESXi hypervisor, I can take this use case to some pretty cool levels and have an enormous amount of flexible options.

So for this blog and use case we have 2 environments, which will be known as:

1 - "Local Data Center" - This is the one that was originally running on the "Flex POD" but is now running on Google Cloud. Think of it as my "Local Data Center".

2 - "Cloud Data Center" - This is the one running in Amazon AWS. Think of it as my service provider. It just happens that you have full control of the service provider and can spin it up on-demand whenever you like. Sweet!!!

I may soon expand this blog and use case, as Veeam has some other cool options I did not dive into here - keep an eye out for that.

Ok so let's get into the technical stuff and details on the setup...

Below you can see that I have 2 applications deployed inside Ravello. One application is running in Amazon and the other application is running on Google Cloud. (Each Application has 6 VMs)
I will dive into the detailed application configuration below:

app-config

Below is the detailed application view inside Ravello. This application, deployed in Google, is acting as the "Local Data Center" - this is where my primary "protected" virtual machines are running.
You can see I have 2 VMware ESXi nodes in a cluster utilizing FreeNAS for NFS datastore(s). I also have a vCenter deployed and managing the cluster.
There are also 2 virtual machines running Windows with the "Veeam Backup and Replication Server" software.

One great thing about Veeam is that all the software and features are available with the same package installed. You just configure the component you want to utilize.

In my case, one is acting as the primary backup server and the second is acting as a WAN accelerator.
I simply keep a virtual machine image inside my Ravello library with the base software installed. I "drag" and "drop" a new one onto the canvas to add an additional server and/or Veeam component(s).

Veeam-components

Below is the "Cloud Data Center" application version running in Amazon, Again think of this application environment as my DR site. Later you will see I can recover virtual machines into this environment.
It is basically an exact copy. The only difference is I am running "Veeam Cloud Connect" on one of the windows virtual machines. It receives backups from my "Local Data Center". This can be on going or scheduled, as you will see this later.
You will see later we are taking advantage of Veeam's Deduplication and Compression capabilities to reduce the amount of data sent to the "Cloud Connect Machine".

A nice option with this design is that I can schedule when my "Backup Copy" job runs on my "Local Data Center", meaning I can schedule it to happen in the middle of the night.
Even better, I don't have to run my "Cloud Data Center" application all the time. I can simply schedule a separate job that calls the Ravello API to start my "Cloud Data Center" just before my replication job runs and shuts it down after a few hours.

Here you can find some cool examples on how to utilize the API and Python SDK all orchestrated using Jenkins.
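
To make that scheduling idea concrete, here is a minimal sketch of such a job using the Ravello Python SDK (not the exact Jenkins examples linked above). The application name, credentials and the run window are placeholders, and the SDK call names should be checked against the current ravello-sdk documentation.

# Minimal sketch: start the DR application before the nightly backup copy job
# runs, keep it up for the copy window, then stop it again. Placeholders only.
import time
from ravello_sdk import RavelloClient

APP_NAME = 'Cloud Data Center'   # hypothetical name of the Ravello application
RUN_HOURS = 3                    # assumed length of the backup copy window

client = RavelloClient()
client.login('user@example.com', 'password')   # placeholder credentials

# Find the application by name and start all of its VMs.
app = next(a for a in client.get_applications() if a['name'] == APP_NAME)
client.start_application(app['id'])
print('Started %s for the backup copy window' % APP_NAME)

time.sleep(RUN_HOURS * 3600)     # in practice this would be two scheduled jobs

client.stop_application(app['id'])
client.logout()

In a real setup the start and the stop would be two separately scheduled jobs (for example in Jenkins, as in the examples above) rather than a single long-running script.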

It simply comes down to your recovery objectives (RTO/RPO). In this case I have an entire copy of my virtual environment applied each night; you can also run this 24/7 if you want a shorter recovery point objective.

veeam-cloud-connect

Below on the right - you can see the network configuration for my "Cloud Connect" virtual machine. We have a local network that we use for application communication; in this case it is L2 - 10.0.0.0/24.
A very important aspect of this approach is making sure that when we boot up the "Cloud Connect" machine it is accessible via the same public IP address (otherwise the local side would need to be reconfigured manually each time).

For this I am using Ravello's "Elastic IP Address" feature.
This allows me to stop the environment, start it later on and maintain the same public IP address in front of "Cloud Connect". You could go ahead and register a domain name against this public IP address if you like.

This IP could also move to Google Cloud if you wanted to move your "Cloud Connect" infrastructure to another public cloud or another Amazon region for latency purposes. Customers also like to take advantage of the best price for their applications; Ravello has an option to run on the best-priced cloud provider at any particular time.

veeam-master-cloud-network

Below on the right - you can see which services I have exposed via the public IP address.
With Ravello you can either choose to completely "fence" your virtual machines or specify which ports you want to expose. You can see I have "fenced" the vCenter and the ESXi hosts; they can be accessed by the other VMs over the private network or via the console.
In this case I have opened port "6180" for replication traffic and "3389" so I can RDP to the "Cloud Connect" machine. I use this to configure "Cloud Connect", and I am also running the "vCenter Client" on that VM.

Ravello also supports "IP Filtering", which allows me to only let certain source addresses connect to the services or ports I described above.

veeam-ip-filtering

Below on the right - I have a 2nd disk attached to my "Cloud Connect" virtual machine. This disk is where I store all the backups coming from the "Local Data Center".
You have complete control of the virtual machine's configuration; you can also make this disk bigger later or add more disks, memory, CPUs and so on...

veeam-master-cloud-disk

Here is a detailed view of my application networking; you can see I have many networks.
I have designed the VMware ESXi cluster in a traditional way, with separate networks for storage, vMotion and guest traffic, for example.

vmware-esxi-networking-tab

With Ravello you also have "console" access to view the virtual machine and apply initial configuration options.
Here you can see the vCenter virtual appliance running.

Below you can see one of my ESXi hypervisors up and running.

esxi-hypervisors-running

Here is my FreeNAS Virtual machine.

freenas-vm

OK - so let's dive into the Veeam configuration and backup examples.
Below you can see I am running a backup in my "Local Data Center"; I am backing up a Linux virtual machine.

backup-linux-vm

I also have a 2nd backup running; this one happens to be my local domain controller.

local-domain-controller

We are going to let those run and go ahead and configure our "Cloud Connect" virtual machine running in our "Cloud Data Center" on Amazon, described above.
I have RDP'ed into the virtual machine via the "Elastic IP Address" - we will go ahead and configure it.

We can go ahead and add a "Cloud Gateway"; in my example I already have one configured but will review the settings for you.

review-cloud-gateway

When you are finished you will see your "Cloud Gateway"; you can also edit these settings later, as I am doing in the example.

edit-cloud-gateway-settings

You can see I am running the service on port "6180"

cloud-gateway-service

Here I specify my "Public Elastic IP Address"

cloud-gateway-elastic-ip

cloud-gateway-setting-apply

You will want to go ahead and create a user; "Cloud Connect" supports multi-tenancy, so you can have separate clients with separate user accounts, storage and quotas.
I will do this example using the user "kyle". You can see I haven't been active for 34 days, but we can go ahead and sync the changes since the last successful replication run.
I currently have 3 virtual machines protected.

veeam-multiple-users

Here I configure my user; you can control the lease for example.

veeam-control-lease

Here you can configure the repository as well as the user quota. You can also choose whether you want to utilize WAN acceleration.

wan-acceleration

Here you can see my WAN accelerator virtual machine.  I am RDP'ed into it.  It's just another virtual machine in my "Cloud Data Center".

wan-acceleration-vm-rdp

Ok, so now that we have "Cloud Connect" all configured and running in our "Cloud Data Center", we can go ahead and register it in our "Local Data Center".
I am RDP'ed into my "Local" Veeam virtual machine. You can see I have already added my "Local" vCenter and ESXi cluster. I have 2 ESXi hosts up and running.

two-esxi-hosts

Let's go ahead and register the "Cloud Connect Gateway" - it acts as your "Service Provider".
This is the one we just configured in "Amazon".

cloud-service-provider

Here we specify my "Public Elastic IP Address" - This is my "Cloud Connect" virtual machine running on Amazon.

cloud-connect-aws

Here we can see the "Certificate" and also specify the user we configured earlier.

cloud-connect-certificate

It accepted my user account; you can see I have a 100GB quota and am also configured for WAN acceleration.

cloud-connect-user-wan-disk

Ok we are all done; let's move on to the next steps.

cloud-connect-setup-done

You can see our "Cloud Connect" endpoint up and running.

cloud-connect-endpoint

Earlier you saw me running a couple "Local" backups.  These are stored on my "Local" environment.
Next we need to configure a "Backup Copy Job".  This is telling Veeam to take the "Local" backup copy and replicate it to our "Cloud Data Center" Environment.
You can see that you have the option to schedule the job to run at certain times.

cloud-data-center-jobs

Here you click "Add" - you are going to pick from "backups".

pick-from-backup

You can see the existing "Local" backups; I am going to pick my Linux virtual machine.

local-linux-backup

Here I choose my repository - in this case I want to utilize my "Cloud Repository".
You have some retention options here, for example, and you can also have a look at the "Advanced Options".

cloud-repository

I am choosing 7 restore points.  You can also see the quota I still have available.

choose-restore-points

I want to make sure to utilize my WAN accelerator on my "Local Environment".

local-environment-wan

All right, now we can see the job running; we are now replicating this virtual machine to our "Cloud Data Center" environment.
You can see lots of details at the bottom of the running job.

replicating-job-details

Here you can see I am connected to the "Cloud Data Center"; I see "kyle" as an active user. I have 3 protected virtual machines.

active-user-protected-vms

Here I jump back to the "Local Data Center" side. Have a look at the compression and dedupe: you can see we have processed 29.4GB but only copied 477.3MB over the WAN.
You can see the WAN is the bottleneck, not the virtual infrastructure.

permissions-dedup

Back to the "Cloud Data Center", you can see the job was successful and I now have 4 protected virtual machine in the "Cloud".

job-successful

I am up to 55% of my quota of 100GB.

space-used

Ok, so now we have some "Local" backups and have also replicated a new virtual machine to the "Cloud Data Center".
As you saw above, I have 4 VMs protected in the "Cloud Data Center".

First - let's "Rescan the Repository"; this will allow us to pick up the new changes that have been applied to the repository.

rescan-repository

You can see below that 1 new virtual machine has been added (that is our most recent "backup copy" job).

new-vm-added

Let's get into some "Cloud Data Center" restore options. Go ahead and choose the "Restore" button.
We will start with an "Entire VM Restore".

entire-vm-restore

Here you select "From Backup".

restore-from-backup

You can see all the virtual machines I have available to restore, as well as the restore options available for each virtual machine.

all-restore-options

You can see I picked my Linux virtual machine. It shows the restore point date available.

linux-restore-point

Here I select a new location in my "Cloud Data Center" to restore to.

restore-location

I choose to restore to "esx01"

restore-esxi-01

I select the "datastore" I want the files to be restored on.

destination-datastore

I have the option to choose the disk type, to save space I always select "thin disk".

thin-disk

Here I select the "virtual network" I want assigned to the virtual machine after it boots up.

virtual-network

Ok let's go ahead and kick off the restore.  We can review all the options we selected.
I have the option to power on after restore; this time I will do it manually via vCenter.

restore-summary

Now looking at VCenter I can see the virtual machines I have started to restore to my "Cloud Data Center".

vcenter-restore

Going back into the Veeam screens, the restore is currently running (you can see lots of data in the log).

veeam-restore-running

All right, we have successfully restored our virtual machine!!!

successful-vm-restore

I have booted up the virtual machine in the "Cloud Data Center" vCenter and I am able to use the console to connect to it.
The VM is in the exact same state as it was when running in my "Local Data Center".

cloud-data-center-vm-state

Now let's go ahead and do an "Instant VM Recovery". Same as before - start with the "Restore" button above.

instant-vm-recovery

Same thing, pick the virtual machine you want to restore.

vm-to-restore

I will pick the "Full" backup I have available.

full-backup

Same as last time, I want to choose to restore to a new location.

restore-to-new-location

I will use "esx01" again.

restore-esxi-01-new

Make the other selections required.

all-selections-required

Pick the datastore attached to the cluster.

cluster-datastore

Ok our "Instant VM Recovery" has started.

instant-vm-recovery-start

Looking inside VCenter we can see the recovery has started.

vcenter-restore-started

So now Veeam is waiting for "user to start migration".

veeam-user-start-migration

I can go ahead and boot up my virtual machine in VCenter.

boot-vcenter

You can see the virtual machine is "Mounted".

vm-mounted

I can choose "Migrate to Production"

migrate-to-production

I am able to choose the options available.

choose-options-available

If you are utilizing separate "Proxy Servers" you can make the appropriate selections.

proxy-server-selection

Ok we can finish up the "Recovery".

finish-recovery

You can view the status of the "Migration".

migration-status

migration-status-progress

Ok - SUCCESS!

migration-success

We can get on the "console" of the virtual machine

console-to-vm

Ok, let's go ahead and try out a "File Level Restore" to our "Cloud Datacenter" environment.

file-level-restore

This time we pick a "windows virtual machine" to do a file level restore.

windows-vm

I can pick my restore points available.

pick-restore-points

Veeam shows me my file system and all the files available that can be restored.

veeam-file-system

As you can see we also have some "application" options inside Veeam.
This virtual machine is a domain controller so I can restore "Active Directory Items".

veeam-application-options

This virtual machine had a D drive.
You can simply "Copy" and "Paste to local machine" the files you want to recover.

paste-files-local-machine

We can also do file level restores of Linux file systems.

linux-file-restore

For this option Veeam will boot up a temporary virtual machine in our ESXi cluster.

veeam-temp-machine

Choose your Linux virtual machine

choose-linux-veeam

You can select "Customize" to make the appropriate selections.

customize-veeam

You can see the "VeeamFLR" virtual machine boot up.

veeam-FLR-vm

You can now make the appropriate file level restores you desire.

Now I am going to go ahead and turn off just my ESXi cluster environment; I want to leave the "Cloud Connect" and "WAN Accelerator" machines online to continue to receive VM replication.

cloud-connect-wan-accelerator

In conclusion, utilizing Ravello’s powerful service and the unique ability to run ESXi hypervisors in the public cloud, I can take my disaster recovery options to the next level. It’s a great use case for Veeam training and also for doing dry runs to ensure your disaster recovery plans are ready for when you need them most. Not to mention it is all on-demand and runs on public cloud infrastructure.

Go ahead and sign up for the ESXi beta and give it a try for yourself.

Please send us your feedback, and let us know if you need any help with your specific use case.

The post How to use Veeam Cloud Connect for DR: backup and restore with nested ESXi on AWS appeared first on The Ravello Blog.


Nested ESXi on AWS – say what!?


AWS-ESXi

I still love this story from Maish Saidel-Keesing where he talks about how everybody vividly remembers the first time they saw vMotion in action and instantly realised that it would change the way they used computers. Back in 2013 when we first unveiled Ravello HVX, Maish said "A few weeks ago - I got that feeling again - but this time not with vMotion but rather with a product that I saw for the first time from Ravello Systems".

I could completely relate to Maish's story because in a previous life, I'd worked at VMware and had experienced first-hand the overwhelming support from the technical community. I'd constantly run out of the "I heart VMware" stickers back then. And when I ran product marketing for VSAN in its early stages (yup, before we went and called it VSAN with a capital V), I had goosebumps realizing that VMware could do for storage what it did for compute.

I Love VMware

In the midst of all that excitement, I stumbled upon Ravello Systems - a company built by the team behind KVM, a company that was yet to launch its first offering and was talking about virtualizing the clouds so that VMware workloads could run unmodified on AWS. "Isn't the cloud already virtual? Watch out, that sounds like unicorns and rainbows", said some of my peers. But as soon as I got to know the people at Ravello and understood the technology, I was hooked. And the team's open source background definitely brought another unique perspective on the power of communities. That was more than two years ago. Fast forward to 2015, and we're excited to see tremendous market momentum.

Ravello's most common customers continue to be those who have large VMware deployments, and want to seamlessly use a public cloud such as AWS or Google Cloud, for testing, staging, training, certification and demo labs - without the hassle of cloud migration, without the hassle of converting those vmdks or re-doing their networking. It's a VMware-centric view of the world. No more Hotel California "you can check in but you can never leave" problem with public clouds.

But then along came Red Hat and said they'd like to run multi-node OpenStack labs on Ravello. Until then we were running VMware and KVM workloads on public clouds. But now we'd have to run the KVM hypervisor itself. A hypervisor such as KVM requires hardware extensions such as Intel VT and AMD SVM in order to even boot up, let alone run well. Our engineering team took up the challenge and after months of hard work we were able to run the KVM hypervisor on AWS/Google. It's still the only way to run KVM on AWS and Red Hat is currently running their OpenStack training labs on Ravello.

And by then the VMware community was already asking about nested ESXi in the cloud. So we went ahead and implemented that too. Our Ravello ESXi Smart Lab offering went into beta exactly a month ago.

We are extremely grateful to the leaders in the virtualization community that have been so forthcoming with their feedback and have been graciously sharing their insights. Some of them have even built cool labs doing crazy things like a vMotion from AWS to Google cloud :-) Here are the links to the blogs from some of the best bloggers out there.

In my opinion, this is pretty cool, and it opens the door to a lot of different possibilities: upgrade testing, automation testing, new feature testing, hosted home labs (aka "Lab as a Service"). Lots of folks are interested in using this new Ravello functionality for "Lab as a Service".

Scott Lowe
Running vSphere on AWS or GCE

Great nested environment which can be used for testing workflows, updates in case you don’t have spare lab hardware for testing. With no upfront costs it looks like quick setup and spin environments

Vladan Seget
Run Nested ESXi in AWS or Google with RavelloSystems

Ravello gave me a shot to try it for myself – and during the introductory chat as they were showing me how things worked I thought, hey, what a use case for the new cross vCenter vMotion capabilities in vSphere 6! A lab in Amazon, a lab in Google Cloud, and VMs migrating between them – how cool is that?

Mike Preston
A Google Cloud to Amazon vMotion – The Ravello Way!
VXLAN on Ravello between Google and Amazon EC2

Ravello is getting closer to the dream of abstracting the data center. I think HVX has uses outside of nested labs. As the technology matures, I can see a market for the raw HVX technology. Enterprises needing to rapidly deploy data centers can use HVX as the base and apply their abstracted design upon any bare metal underlay.

Keith Townsend
Running vSphere in Amazon or Google Compute
Ravello Systems Nested ESXi First Look

As you can imagine, this was not a trivial feature to add support for especially when things like Intel-VT/AMD-V is not directly exposed to the virtual machines in EC2 or GCE which is required to run ESXi. The folks over at Ravello has solved this in a very interesting way by "emulating" the capabilities of Intel-VT/AMD-V using Binary Translation with direct execution.

William Lam
Running Nested ESXi / VSAN Home Lab on Ravello

I suggest to give it a try… it’s something really interesting and could be powerful for build your lab. I hope also that the community can grow around this to improve the public images and public blueprints and maybe also work on the architecture aspects of an application (the visual designer is really useful) to share more experience.

Andrea Mauro
Ravello System and its Lab as a Service solution
Ravello – How import an existing VM

The end result pleased me greatly. The interface is very intuitive and perfectly meets the performance needed for a lab environment. In addition, the possibility of using templates (VM library) and blueprints (which allows the deployment of a complete environment) facilitates and speeds the creation / reproduction of new environments.

Tiago Martinez
Lab VMware na nuvem (AWS ou Google Cloud)

But back to labs. ESXi in the cloud, you pay by what you use, and you have basically an infinite pool of resources, not your puny little laptop or 5-year-old rack mount space heater. What can you do with it?

John Troyer
Cloud vSphere Labs for fun and profit

In general I was really impressed by the ease of use of the interface to Ravello and the functionality it’s cloud platform currently has. Also the performance during the building the vSphere 6 lab was fine.

Robert Verdam
Ravello Systems – Smart Labs

For now I am very enthusiastic about the service (although I still have very little really done with it), especially since I have encountered further nowhere a cloud provider that provides this functionality.

Ronald De Jung
TLAAS (Test Lab as a Service) Voor vSphere

With Nested ESXi support on Ravello it is possible for my AutoLab to run on public cloud. Rather than buying a beefy physical machine to run your vSphere study lab you can rent the capacity from Ravello and just run the lab when you need it.

Alastair Cooke
Nested virtualisation becomes nested cloud
Autolab with vSphere 6, Now with Extra Cloud
Autolab 2.6 on Ravello Video

The post Nested ESXi on AWS – say what!? appeared first on The Ravello Blog.

Ravello VM Import Tool – Best Practices and Tips


Ravello logo

Ravello’s VM Import Tool

Ravello offers an import tool that enables organizations to easily upload their VMware and KVM VMs to Ravello’s platform in a variety of ways - directly from vCenter or vSphere, as an OVF, or by uploading disk files and images (ISOs, VMDKs, QCOWs).

Factors affecting upload time

As one may expect, the time taken to upload the VM by the import tool depends on -

  1. Size of VM - larger VMs take longer
  2. Bandwidth available - more bandwidth means faster upload
  3. Link characteristics - lossy links take longer

We recently ran some tests using Ravello’s VM import tool to characterize the upload time taken based on the type of link. We emulated the WAN link using the popular tool - WANem - and uploaded a 78 MB ISO file for our tests.

As expected, the upload time was small for links with higher bandwidths (e.g. OC-9, FDDI etc.) and large for links with lower bandwidths (e.g. T-1, ADSL etc.). The following chart should give one a rough estimate of the time it would take to upload a similar sized VM based on the link available (note the logarithmic scale for the bandwidth axis).
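
As a back-of-the-envelope check against the chart, the theoretical minimum upload time is simply the file size divided by the link rate; the short sketch below computes it for the 78 MB test file at a few nominal link speeds (real uploads take longer because of protocol overhead and packet loss).

# Theoretical minimum transfer time for the 78 MB test ISO at nominal link rates.
FILE_MB = 78
links_mbps = {'T-1 (1.544 Mbps)': 1.544, 'CAT-3 (10 Mbps)': 10.0, 'FDDI (100 Mbps)': 100.0}

for name, mbps in links_mbps.items():
    seconds = FILE_MB * 8 / mbps        # megabits divided by megabits per second
    print('%-20s ~%5.0f s minimum' % (name, seconds))
# Roughly 404 s on a T-1, 62 s at 10 Mbps and 6 s on FDDI, before any overhead.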

image02

Next, we looked into the impact of link characteristics on the upload time. Keeping the bandwidth constant at 10 Mbps (CAT-3), we uploaded the 78 MB ISO file at different levels of packet loss. As expected, the upload time increases with the increase in packet loss. At greater than 20% packet loss, the upload time increases exponentially - thanks to multiple re-transmissions. It is interesting to note that with packet losses of less than 15%, the VM Import Tool is able to recover gracefully, keeping the upload time fairly constant.

image05

Having difficulty uploading?

If you are facing challenges with the Ravello VM Import Tool that cannot be explained by the bandwidth and link characteristics mentioned above, here are some things to look into -

Is the system clock of the machine running the Ravello VM Import Tool in sync with NTP?

Amazon’s S3 - home to the VMs in your VM library - is sensitive to machine clock timestamps. If the system clock of the machine that is using the upload tool is not set accurately, the upload will fail at 1% progress. To get around this issue, sync the system clock to pool.ntp.org.
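
If you want to script this check before a big upload, here is a small sketch that measures the local clock offset against pool.ntp.org using the third-party ntplib package; the 60-second threshold is an arbitrary assumption for illustration, not a documented limit.

# Small sketch: check the local clock offset against pool.ntp.org before uploading.
# Requires the third-party 'ntplib' package (pip install ntplib).
import ntplib

response = ntplib.NTPClient().request('pool.ntp.org', version=3)
print('Local clock is off by %.2f seconds' % response.offset)
if abs(response.offset) > 60:   # threshold chosen arbitrarily for this sketch
    print('Re-sync the system clock before starting the upload')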

Do you need a proxy to connect to the internet?

If your environment requires a proxy to be able to access the internet, you will need to configure the proxy's IP in Ravello's VM Import Tool before you can upload. Instructions on how to set up the proxy are available in Ravello's knowledge base.

Are you unable to login to the VM Import tool?

Ungraceful shutdowns or hibernation of the machine running the VM Import Tool in the middle of an upload can corrupt the tool's database. This can leave the user unable to log in to the tool. To work around this issue, reset the tool as follows (a small cleanup sketch for step 2 follows the list) -

  1. Stop the Ravello VM Import Tool service - on Windows, run services.msc and stop the Ravello VM Import Tool service; on Mac, close the ravello-vm-import-server.
  2. Browse to the following folder location and delete all the files with the json extension (*.json)
    1. Windows - C:\Windows\Temp\.ravello\
    2. Mac - /Users/<name>/.ravello/
  3. Restart the Ravello VM Import Tool service
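
For step 2, the following is a small cross-platform cleanup sketch using the default folder locations listed above; run it only after the VM Import Tool service has been stopped (step 1).

# Small sketch for step 2: delete the tool's cached *.json files.
# Run only after stopping the Ravello VM Import Tool service (step 1).
import glob
import os
import platform

if platform.system() == 'Windows':
    ravello_dir = r'C:\Windows\Temp\.ravello'
else:  # Mac default location from the list above
    ravello_dir = os.path.expanduser('~/.ravello')

for path in glob.glob(os.path.join(ravello_dir, '*.json')):
    os.remove(path)
    print('Deleted', path)
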
Is the progress on the VM Import Tool stuck at 1%?

The VM Import Tool uploads in chunks. Until the first chunk is fully uploaded the progress bar shows 1%, and as more chunks are uploaded the progress bar gets updated. On slow links, the progress bar shows 1% for a long time until the first chunk is uploaded - and then suddenly jumps to a higher percentage. If you are concerned that the VM Import Tool is not uploading properly, take a peek into the logs to confirm that the upload is in progress.

VM Import Tool Logs

Feeling adventurous, and want to explore what is going on behind the scenes? Read on.

1. With your Ravello VM Import Tool running, type http://127.0.0.1:8881/hello in your web-browser

2. Your browser should display the location where the log file for VM Import Tool is being stored

image03

3. On Windows the default location is C:\Windows\Temp\.ravello\store.log and on Mac OSX the default location is /Users/<name>/.ravello/store.log

image04

4. Open the store.log in Notepad (Windows) or Console (Mac). The following snippet indicates that the upload has started:

image00

5. The following snippet indicates that the upload has completed:

image01
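
If you prefer to script this check, here is a small sketch that queries the local hello endpoint from step 1 and prints the last few lines of store.log; the log path shown is the Mac default from step 3 and would need adjusting on Windows.

# Small sketch: query the VM Import Tool's local endpoint and tail store.log.
import os
import urllib.request

print(urllib.request.urlopen('http://127.0.0.1:8881/hello').read().decode())

store_log = os.path.expanduser('~/.ravello/store.log')   # Mac default; adjust on Windows
with open(store_log) as f:
    for line in f.readlines()[-20:]:    # the last lines should show upload progress
        print(line.rstrip())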

If your upload issues persist, please reach out to the Ravello Support team with a copy of your store.log, and we will be happy to help.

The post Ravello VM Import Tool – Best Practices and Tips appeared first on The Ravello Blog.

VMware ESXi on Ravello – beta tech chat with product team. Episode #4


podcast

We held our VMware ESXi on Ravello tech chat earlier this week - we will soon update this post with the recording. You can also sign up for our next product tech chat and join us for an interactive discussion with the product team.

Some key questions posed during the tech chat

We are a VMware partner company. How can I test my VMware virtual appliance?

Ravello can help TAP partners test their VMware appliance. There are two options -

  1. Run your appliance on top of ESXi running on Ravello - needed when the appliances need to interface with ESXi for any reason - data replication, networking, security
  2. Run natively on Ravello - can be used for most appliances that do not need to interface with ESXi.

 

What are the uses of a home lab on Ravello?

People need home labs for a variety of reasons - staying current with technology, playing with partner products, etc. There are three ways to have a home lab using Ravello -

  1. All components run on Ravello - ESXi nodes, vCenter, shared storage
  2. vCenter/vSphere at home and a couple of ESXi compute nodes on Ravello
  3. vCenter & some ESXi compute nodes at home, with additional ESXi nodes on Ravello for burst capacity

For scenarios 2 & 3 - two options exist -

  • No VPN in the environment - one needs to open certain ports between the data center and Ravello
  • Set up VPN devices on both sides - data center & Ravello

Could you please explain how Ravello’s pricing works?

Ravello's pricing calculator is available at www.ravellosystems.com/pricing. Plug into the calculator the total vCPUs, RAM and storage needed across your environment (ESXi hosts, vSphere, shared NFS storage). Choose the networking tier and whether you want to run it cost-optimized or performance-optimized. As an example, an environment with 12 vCPUs, 24 GB RAM and 200 GB storage will cost between $0.91 and $1.75/hour.
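
For a rough monthly figure, multiply that hourly range by your expected usage; the sketch below assumes about 40 lab hours per month, which is purely an assumption.

# Rough monthly-cost estimate for the 12 vCPU / 24 GB RAM / 200 GB example above.
low_rate, high_rate = 0.91, 1.75    # $/hour range quoted above
hours_per_month = 40                # assumption: roughly 10 hours of lab time a week

print('~$%.0f - $%.0f per month' % (low_rate * hours_per_month,
                                    high_rate * hours_per_month))
# With these assumptions the lab costs roughly $36 - $70 per month.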

How do I use Ravello to study for VMware certification - VCPE?

Ravello is a very convenient tool for building home labs that can be used to practice for VMware certification exams. The best way to study for VMware certification is to build out the exercises from the study guide on a Ravello-based ESXi home lab.

Does Ravello also run OpenStack?

Yes, OpenStack runs on Ravello. Ravello can be used to run multiple OpenStack versions (Havana, Juno, etc.) with hardware acceleration. There are multiple OpenStack blueprints available that can be transferred to your account.

The post VMware ESXi on Ravello – beta tech chat with product team. Episode #4 appeared first on The Ravello Blog.

Software training portal: creating courses, classes and student labs using Ravello


Virtual Training in the Cloud

To simplify the set-up, administration and delivery of classroom training, instructor-led training and self-paced training sessions, Ravello created an easily configurable training portal. The training portal is the hub for setting up and administering as many courses, and specific instances of those courses, as required - by accessing pre-created application blueprints and providing each student with an isolated environment of the relevant applications. In this post I will quickly go through the steps for creating and delivering training using Ravello.

Accessing the training portal

The training portal is a VM running in Ravello. Currently users who want to use it can let Ravello’s support know that they require it via a support ticket and the VM is then copied to the user’s library. Very soon the training portal VM will be made available publicly without needing a support ticket.

To set up your own portal - drag and drop the training portal VM onto the canvas and publish it. You will need this VM to run for the duration of any class that is running using the portal.

Once the portal VM (application) is published, you can browse to it by using the URL that is provided for it in the summary tab of the VM.

Giving a trainer access

The next step is to create a trainer entity (you can have several of those), using the admin credentials that are defined on the training portal VM (and are communicated in the description of the VM in the UI). For the trainer's Ravello credentials you will need to use an existing Ravello user account.

Creating a course

The next step is creating a course. Using the trainer's credentials, you will set up the flow of a training - the set of blueprints that will be available to students. It is this set of blueprints that is used as the basis for creating the different labs in the course. For example, you can use several pre-configured blueprints of the environment at different stages to create the sections of the full flow of a course.

Creating classes (instances of the course)

Finally it’s time to create a class. A class is the set of students who will go through the course. That means that if we have a course, say NetScaler VPX 101, we can create as many classes around it as we’d like. For example: NetScaler Channel Partners June 2015, VPX End Users July 2015, and so on. Each class is defined by the students that are a part of it. When you create the class with the student entities in it, you can also configure the permissions each student will have for each of the blueprints included. When it’s time for the training session itself - you will need to start the relevant student applications.

Running student labs

Now that a class is defined with all relevant students, the students can log in and spin up the first section of the class using the relevant pre-configured blueprint that is available to them through the training portal.

Use cases

Ravello users utilize the training portal for two main use cases. The first is the classic (virtual) training class, or instructor-led training (ILT). Another important use case is user conferences and channel partner trainings, where software and virtual appliance vendors run hands-on sessions for their channel partners, end users, sales engineer trainings and more.

If you need help creating your first course and class - drop us a line, and we'll get you started.

The post Software training portal: creating courses, classes and student labs using Ravello appeared first on The Ravello Blog.

Continuous integration testing with ESXi labs on AWS and Google cloud for StacksWare VDI software asset management product


AWS-ESXi

Author:
Vivek Nair
Interests include geeking out about virtualization & building complex distributed systems. Previously worked at Asana and on the Local Law Enforcement team at Palantir Technologies. Currently a lecturer in the CS department at Stanford University

This post summarizes how StacksWare, an agentless software asset management product for VDI, has implemented CI testing of their product on ESXi infrastructure test beds. The StacksWare development team has modeled complex production ESXi infrastructures on Ravello's service, without the overhead of buying their own rack servers or paying for expensive managed data centers.

StacksWare Introduction

StacksWare is an agentless software asset management solution for virtual desktop infrastructure (VDI). Our product allows organizations to track their application usage and determine whether they are in compliance with their license agreements. Our software passively gathers application usage via VMware’s ESXi hypervisor and Horizon View, eliminating the need to install an intrusive agent on every guest OS.

StacksWare Diagram

Why should you care if your organization’s virtual infrastructure is out of compliance? Audits by software vendors have been steadily increasing. According to recent research, over half of all enterprise organizations have been audited in the past two years. If found out of compliance, organizations can be subjected to large penalty fines or even heavy litigation. With StacksWare, organizations can easily monitor and ensure their compliance.

Currently, existing license management solutions require organizations to install a monitoring agent on each and every guest OS to collect application data. This introduces data privacy and infrastructure concerns. StacksWare only requires organizations to install a virtual appliance, the StacksWare Internal Monitor (SIM), into their VDI environment. SIM allows organizations to track application data within minutes. Our software is currently the only agentless license management solution on the market.

Search for a scalable, on-demand VMware ESXi lab/test environment

Enterprises of varying scale rely on StacksWare to track their application usage. For example, some of our customers provide several thousand desktops to their employees. To support both SMBs and large enterprises, we needed a flexible solution for benchmarking StacksWare’s performance.

Before Ravello, we rolled our own commodity rack servers to run VMware ESXi. Aside from the headache of setup and maintenance, we couldn't easily automate and schedule our own hardware. Providing a flexible test infrastructure to benchmark against SMB infrastructures (~20 ESXi hosts) and then against large infrastructures (>75 ESXi hosts) was totally infeasible due to memory and compute constraints. After that painful experience, we decided to try out Rackspace's managed colocation service for ESXi. Though they provided services for quick hardware scalability in their SLA, the price tag was prohibitively expensive. We also found that managed colocation solutions were often hosted in multi-tenant environments that degraded performance and stability.

Running VMware ESXi hosts on AWS and Google Cloud with Ravello Systems

I heard from fellow entrepreneurs that a Sequoia-backed company, Ravello Systems, provided nested virtualization with a flexible pay-as-you-go model for VMware ESXi labs on public clouds. At first I was nervous that the ESXi beta version would not be full-featured enough to model mature production ecosystems like the VDI environments of major universities with thousands of nodes. I had plenty of questions about Ravello's infrastructure.

  • What if our customers are using hosts with VMXNet3 network drivers? Does the product have the tooling to support multiple drivers?
  • How can we model a tight firewall network in their cluster?
  • How difficult is it to configure shared storage systems like NFS or iSCSI or vSAN across the cluster?

Within minutes of playing around with its functionality, I learned that Ravello provides a rich ecosystem for modeling sophisticated production environments. This was definitely the solution that we were looking for.

Our Current Continuous Integration Testing Process

Our team went to work building a development pipeline in Ravello that made sense. With their development API (https://www.ravellosystems.com/developers/rest-api), we developed an efficient process to test any additional features for our virtual appliance with just a few Python scripts (a simplified sketch follows the list below).

  1. When we create a commit in our Github repository, our continuous integration service creates a job to spin up a Ravello blueprint, using their Python SDK.
  2. Once the ESXi hosts in Ravello have fully booted up from the blueprint, we use the vSphere APIs to programmatically deploy a new StacksWare internal monitor through the vCenter that’s managing these ESXi hosts.
  3. Once the build finishes, we propagate any errors to our continuous integration service and spin down the blueprint. This ensures that we don’t use any more resources than we need.
  4. If there are no errors in the build process, we then repeat steps 2 and 3 with the “next tier” blueprint. More information about blueprint tiering in the next section.
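
As a rough illustration of steps 1-4 (and not StacksWare's actual pipeline), here is a simplified sketch of what one such CI job could look like with the Ravello Python SDK; the blueprint ID, credentials and boot wait are placeholders, the SDK call names and fields should be checked against the ravello-sdk documentation, and the vSphere/SIM deployment of step 2 is reduced to a stub.

# Simplified CI sketch: spin up an ESXi test bed from a Ravello blueprint,
# run the test stub, then tear everything down. All values are placeholders.
import time
from ravello_sdk import RavelloClient

BLUEPRINT_ID = 111111            # placeholder ID of the ESXi test-bed blueprint

def deploy_and_test_sim():
    """Stub for step 2: deploy the StacksWare Internal Monitor via the vSphere
    APIs against the nested vCenter and run the functional checks."""
    pass

client = RavelloClient()
client.login('ci-user@example.com', 'password')   # placeholder credentials

app = client.create_application({'name': 'ci-esxi-testbed',
                                 'baseBlueprintId': BLUEPRINT_ID})
client.publish_application(app['id'])

try:
    time.sleep(20 * 60)          # crude wait for the nested ESXi hosts to boot
    deploy_and_test_sim()
except Exception as err:
    print('Build failed:', err)  # the CI service picks this up (step 3)
    raise
finally:
    client.stop_application(app['id'])
    client.delete_application(app['id'])   # don't pay for idle resources
    client.logout()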

Blueprint Tiering for multiple test scenarios

We created different tiers of Ravello blueprints to test basic functionality and performance during this build process.

  • The first blueprint sanity checks the basic Ravello functionality across three different ESXi hosts. We catch most bugs without expending time and money with this configuration.
  • The second blueprint models a standard SMB infrastructure with 20 ESXi hosts.
  • The third blueprint models a larger enterprise infrastructure with 50 ESXi hosts.

If the build system successfully builds the latest StacksWare commit against these three blueprints, the code is merged to our master Github branch and the virtual appliance is ready for deployment!
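
The tier ordering itself is simple enough to sketch: run the cheapest blueprint first and only move on to the larger test beds when the previous tier passes (host counts as described above).

# Sketch of the tier ordering: cheapest sanity tier first, enterprise tier last.
BLUEPRINT_TIERS = [
    ('sanity',      3),   # 3 ESXi hosts - catches most bugs cheaply
    ('smb',        20),   # standard SMB-sized infrastructure
    ('enterprise', 50),   # larger enterprise infrastructure
]

for name, esxi_hosts in BLUEPRINT_TIERS:
    print('Running CI against the %s tier (%d ESXi hosts)...' % (name, esxi_hosts))
    # the build-and-test run against this tier's blueprint would go here;
    # abort on the first failure so later (more expensive) tiers never run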

Enterprise-Sizes

Summary

StacksWare is an agentless software asset management tool for virtual desktop infrastructures such as VMware ESXi. We use Ravello Systems service to model complex production ESXi infrastructures without the overhead of buying our own rack servers or paying for expensive managed data centers. With Ravello’s full-featured developer API, we’re able to automate and schedule our build process to test for optimal functionality and performance across infrastructures of varying scale such as SMB ESXi clusters and sophisticated enterprise data centers.

Set up your ESXi lab on AWS by signing up for the beta and you will be able to run your own test labs right away.

The post Continuous integration testing with ESXi labs on AWS and Google cloud for StacksWare VDI software asset management product appeared first on The Ravello Blog.
