Microsoft Virtualisation Blog

Information and announcements from Program Managers, Product Managers, Developers and Testers in the Microsoft Virtualization team.

A great way to collect logs for troubleshooting

Fri, 10/27/2017 - 23:30

Have you ever had to troubleshoot issues within a Hyper-V cluster or standalone environment and found yourself switching between different event logs? Or have you reproduced an issue, only to find out that not all of the important Windows event channels had been activated?

To make it easier to collect the right set of event logs into a single .evtx file for troubleshooting, we have published the HyperVLogs PowerShell module on GitHub.

In this blog post, I'll show you how to get the module and how to gather event logs using the functions it provides.

Step 1: Download and import the PowerShell module

First of all, you need to download the PowerShell module and import it.
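For instance, a session to fetch and import the module might look like this; the clone URL below is a placeholder, so take the actual one from the blog's GitHub link:

```powershell
# Clone the repository (placeholder URL) and import the module
git clone https://github.com/<repo-owner>/HyperVLogs.git
Import-Module .\HyperVLogs\HyperVLogs.psm1

# List the functions the module provides
Get-Command -Module HyperVLogs
```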

Step 2: Reproduce the issue and capture logs

Now, you can use the functions provided as part of the module to collect logs for different situations.
For example, to investigate an issue on a single node, you can collect events with the following steps:
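To give a feel for the flow, here is a rough sketch; the function names below are illustrative placeholders, not the module's actual API, so check the module's README for the real function names and parameters:

```powershell
# Illustrative sketch only -- function names are placeholders.
# 1. Enable the relevant Hyper-V event channels on the node
Enable-HyperVEventChannels

# 2. Reproduce the issue ...

# 3. Export the collected events into a single .evtx file
Save-HyperVEvents -DestinationPath 'C:\temp\node1-events.evtx'
```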

Using this module and its functions made it a lot easier for me to collect the right event data to help with investigations. Any feedback or suggestions are highly welcome.

Cheers,
Lars

Categories: Microsoft, Virtualisation

Container Images are now out for Windows Server version 1709!

Wed, 10/18/2017 - 19:30

The release of Windows Server version 1709 also brings new Windows Server Core and Nano Server base OS container images.

It is important to note that while older versions of the base OS container images will work on a newer host (with Hyper-V isolation), the opposite is not true. Container images based on Windows Server version 1709 will not work on a host using Windows Server 2016.  Read more about the different versions of Windows Server.

We’ve also made some changes to our tagging scheme so you can more easily specify which version of the container images you want to use.  From now on, the “latest” tag will follow the releases of the current LTSC product, Windows Server 2016. If you want to keep up with the latest patches for Windows Server 2016, you can use:

“microsoft/nanoserver”
or
“microsoft/windowsservercore”

in your dockerfiles to get the most up-to-date version of the Windows Server 2016 base OS images. You can also continue using specific versions of the Windows Server 2016 base OS container images by using the tags specifying the build, like so:

“microsoft/nanoserver:10.0.14393.1770”
or
“microsoft/windowsservercore:10.0.14393.1770”.

If you would like to use base OS container images based on Windows Server version 1709, you will have to specify that with the tag. In order to get the most up-to-date base OS container images of Windows Server version 1709, you can use the tags:

“microsoft/nanoserver:1709”
or
“microsoft/windowsservercore:1709”

And if you would like a specific version of these base OS container images, you can specify the KB number that you need on the tag, like this:

“microsoft/nanoserver:1709_KB4043961”
or
“microsoft/windowsservercore:1709_KB4043961”.
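As a quick illustration of the tags above, you can pull either the rolling 1709 tag or a pinned KB tag from the command line:

```powershell
# Pull the most up-to-date 1709 Nano Server base image
docker pull microsoft/nanoserver:1709

# Or pin to a specific update level using the KB tag
docker pull microsoft/nanoserver:1709_KB4043961
```

In a dockerfile, the same tags go on the FROM line (e.g. FROM microsoft/windowsservercore:1709).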

We hope that this tagging scheme will ensure that you always choose the image that you want and need for your environment. Please let us know in the comments if you have any feedback for us.

Note: We currently do not intend to use the build numbers to specify Windows Server version 1709 container images. We will only be using the KB schema specified above for the tagging of these images. Let us know if you have feedback about this as well.

Regards,
Ender

Categories: Microsoft, Virtualisation

Docker’s routing mesh available with Windows Server version 1709

Tue, 09/26/2017 - 05:04

The Windows Core Networking team, along with our friends at Docker, is thrilled to announce that Docker’s ingress routing mesh is now supported on Windows Server version 1709.

Ingress routing mesh is part of swarm mode–Docker’s built-in orchestration solution for containers. Swarm mode first became available on Windows early this year, along with support for the Windows overlay network driver. With swarm mode, users have the ability to create container services and deploy them to a cluster of container hosts. With this, of course, also comes the ability to define published ports for services, so that the apps that those services are running can be accessed by endpoints outside of the swarm cluster (for example, a user might want to access a containerized web service via web browser from their laptop or phone).

To place routing mesh in context, it’s useful to understand that Docker currently provides two options for publishing services with swarm mode: routing mesh and host mode service publishing.*

  • Host mode is an approach to service publishing that’s optimal for production environments, where system administrators value maximum performance and full control over their container network configuration. With host mode, each container of a service is published directly to the host where it is running.
  • Routing mesh is an approach to service publishing that’s optimized for the developer experience, or for production cases where a simple configuration experience is valued above performance or control over how incoming requests are routed to the specific replicas/containers for a service. With ingress routing mesh, the containers for a published service can all be accessed through a single “swarm port”: one port, published on every swarm host (even the hosts where no container for the service is currently running!).

While our support for routing mesh is new with Windows Server version 1709, host mode service publishing has been supported since swarm mode was originally made available on Windows. 

*For more information, on how host mode and routing mesh work, visit Docker’s documentation on routing mesh and publishing services with swarm mode.

So, what does it take to use routing mesh on Windows? Routing mesh is Docker’s default service publishing option. It has always been the default behavior on Linux, and now it’s also supported as the default on Windows! This means that all you need to do to use routing mesh is create your services using the --publish flag to the docker service create command, as described in Docker’s documentation.

For example, assume you have a basic web service defined by a container image called web-frontend. If you wanted to publish port 80 of each container to port 8080 on all of your swarm nodes, you’d create the service with a command like this:

C:\> docker service create --name web --replicas 3 --publish 8080:80 web-frontend

In this case, the web app, running on a pre-configured swarm cluster along with a db backend service, might look like the app depicted below. As shown, because of routing mesh, clients outside of the swarm cluster (in this example, web browsers) are able to access the web service via its published port, 8080. In fact, each client can access the web service via its published port on any swarm host; no matter which host receives an original incoming request, that host will use routing mesh to route the request to a web container instance that can ultimately service that request.
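If you want to verify the mesh after running the command above, a quick check might look like this (the node address is a placeholder):

```powershell
# See which nodes the three replicas landed on
docker service ps web

# The published port answers on every swarm node, even ones without
# a web replica; routing mesh forwards the request to a container.
curl http://<any-swarm-node>:8080
```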

Once again, we at Microsoft and our partners at Docker are proud to make ingress mode available to you on Windows. Try it out on Windows Server version 1709 using Docker EE Preview*, and let us know what you think! We appreciate your engagement and support in making features like routing mesh possible, and we encourage you to continue reaching out with feedback. Please provide your questions/comments/feature requests by posting issues to the Docker for Windows GitHub repo or by emailing the Windows Core Networking team directly at sdn_feedback@microsoft.com.

*Note: Ingress mode on Windows currently has the following system requirements:

 

Categories: Microsoft, Virtualisation

Delivering Safer Apps with Windows Server 2016 and Docker Enterprise Edition

Tue, 09/05/2017 - 09:00

Windows Server 2016 and Docker Enterprise Edition are revolutionizing the way Windows developers can create, deploy, and manage their applications on-premises and in the cloud. Microsoft and Docker are committed to providing secure containerization technologies and enabling developers to implement security best practices in their applications. This blog post highlights some of the security features in Docker Enterprise Edition and Windows Server 2016 designed to help you deliver safer applications.

For more information on Docker and Windows Server 2016 Container security, check out the full whitepaper on Docker’s site.

Introduction

Today, many organizations are turning to Docker Enterprise Edition (EE) and Windows Server 2016 to deploy IT applications consistently and efficiently using containers. Container technologies can play a pivotal role in ensuring the applications being deployed in your enterprise are safe — free of malware, up-to-date with security patches, and known to come from a trustworthy source. Docker EE and Windows each play a hand in helping you develop and deploy safer applications according to the following three characteristics:

  1. Usable Security: Secure defaults with tooling that is native to both developers and operators.
  2. Trusted Delivery: Everything needed to run an application is delivered safely and guaranteed not to be tampered with.
  3. Infrastructure Independent: Application and security configurations are portable and can move between developer workstations, testing environments, and production deployments regardless of whether those environments are running in Azure or your own datacenter.

Usable Security

Resource Isolation

Windows Server 2016 ships with support for Windows Server Containers, which are powered by Docker Enterprise Edition. Docker EE for Windows Server is the result of a joint engineering effort between Microsoft and Docker. When you run a Windows Server Container, key system resources are sandboxed for each container and isolated from the host operating system. This means the container does not see the resources available on the host machine, and any changes made within the container will not affect the host or other containers. Some of the resources that are isolated include:

  • File system
  • Registry
  • Certificate stores
  • Namespace (privileged API access, system services, task scheduler, etc.)
  • Local users and groups

Additionally, you can limit a Windows Server Container’s use of the CPU, memory, disk usage, and disk throughput to protect the performance of other applications and containers running on the same host.
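For instance, these limits can be applied with standard docker run flags when starting a container (the values here are arbitrary examples):

```powershell
# Cap the container at 2 GB of RAM, one CPU, and 50 MB/s of disk bandwidth
docker run -d -m 2g --cpus 1 --io-maxbandwidth 50mb microsoft/windowsservercore
```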

Hyper-V Isolation

For even greater isolation, Windows Server Containers can be deployed using Hyper-V isolation. In this configuration, the container runs inside a specially optimized Hyper-V virtual machine with a completely isolated Windows kernel instance. Docker EE handles creating, managing, and deleting the VM for you. Better yet, the same Docker container images can be used for both process isolated and Hyper-V isolated containers, and both types of containers can run side by side on the same host.

Application Secrets

Starting with Docker EE 17.06, support for delivering secrets to Windows Server Containers at runtime is now available. Secrets are simply blobs of data that may contain sensitive information best left out of a container image. Common examples of secrets are SSL/TLS certificates, connection strings, and passwords.

Developers and security operators use and manage secrets in the exact same way — by registering them on manager nodes (in an encrypted store), granting applicable services access to obtain the secrets, and instructing Docker to provide the secret to the container at deployment time. Each environment can use unique secrets without having to change the container image. The container can just read the secrets at runtime from the file system and use them for their intended purposes.
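As a sketch of that workflow using the standard docker secret commands (the secret and image names here are examples; inside Windows Server Containers, secrets typically surface as files under C:\ProgramData\Docker\secrets):

```powershell
# Register a secret on a manager node (stored encrypted in the swarm)
"P@ssw0rd123" | docker secret create db_password -

# Grant a service access to the secret at deployment time
docker service create --name web --secret db_password mywebapp:latest
```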

Trusted Delivery

Image Signing and Verification

Knowing that the software running in your environment is authentic and came from a trusted source is critical to protecting your information assets. With Docker Content Trust, which is built into Docker EE, container images are cryptographically signed to record the contents present in the image at the time of signing. Later, when a host pulls the image down, it will validate the signature of the downloaded image and compare it to the expected signature from the metadata. If the two do not match, Docker EE will not deploy the image since it is likely that someone tampered with the image.

Image Scanning and Antimalware

Beyond checking if an image has been modified, it’s important to ensure the image doesn’t contain malware or libraries with known vulnerabilities. When images are stored in Docker Trusted Registry, Docker Security Scanning can analyze images to identify libraries and components in use that have known vulnerabilities in the Common Vulnerabilities and Exposures (CVE) database.

Further, when the image is pulled on a Windows Server 2016 host with Windows Defender enabled, the image will automatically be scanned for malware to prevent malicious software from being distributed through container images.

Windows Updates

Working alongside Docker Security Scanning, Microsoft Windows Update can ensure that your Windows Server operating system is up to date. Microsoft publishes two pre-built Windows Server base images to Docker Hub: microsoft/nanoserver and microsoft/windowsservercore. These images are updated the same day that new Windows security updates are released. When you use the “latest” tag to pull these images, you can rest assured that you’re working with the most up-to-date version of Windows Server. This makes it easy to integrate updates into your continuous integration and deployment workflow.

Infrastructure Independent

Active Directory Service Accounts

Windows workloads often rely on Active Directory for authentication of users to the application and authentication between the application itself and other resources like Microsoft SQL Server. Windows Server Containers can be configured to use a Group Managed Service Account when communicating over the network to provide a native authentication experience with your existing Active Directory infrastructure. You can select a different service account (even belonging to a different AD domain) for each environment where you deploy the container, without ever having to update the container image.
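In practice, that might look like the following; the account, file, and host names are examples, and New-CredentialSpec comes from Microsoft's separately downloadable CredentialSpec PowerShell module:

```powershell
# On the host: generate a credential spec JSON for the gMSA (example name)
New-CredentialSpec -Name WebApp01 -AccountName WebApp01

# Run the container with that credential spec -- no password is ever
# baked into the container image.
docker run -d --security-opt "credentialspec=file://WebApp01.json" -h webapp01 microsoft/windowsservercore
```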

Docker Role Based Access Control

Docker Enterprise Edition allows administrators to apply fine-grained role based access control to a variety of Docker primitives, including volumes, nodes, networks, and containers. IT operators can grant users predefined permission roles to collections of Docker resources. Docker EE also provides the ability to create custom permission roles, providing IT operators tremendous flexibility in how they define access control policies in their environment.

Conclusion

With Docker Enterprise Edition and Windows Server 2016, you can develop, deploy, and manage your applications more safely using the variety of built-in security features designed with developers and operators in mind. To read more about the security features available when running Windows Server Containers with Docker Enterprise Edition, check out the full whitepaper and learn more about using Docker Enterprise Edition in Azure.

Categories: Microsoft, Virtualisation

Hyper-V virtual machine gallery and networking improvements

Wed, 07/26/2017 - 01:42

In January, we added Quick Create to Hyper-V manager in Windows 10.  Quick Create is a single-page wizard for fast, easy, virtual machine creation.

Starting in the latest fast-track Windows Insider builds (16237+) we’re expanding on that idea in two ways.  Quick Create now includes:

  1. A virtual machine gallery with downloadable, pre-configured, virtual machines.
  2. A default virtual switch to allow virtual machines to share the host’s internet connection using NAT.

To launch Quick Create, open Hyper-V Manager and click on the “Quick Create…” button (1).

From there you can either create a virtual machine from one of the pre-built images available from Microsoft (2) or use a local installation source.  Once you’ve selected an image or chosen installation media, you’re done!  The virtual machine comes with a default name and a pre-made network connection using NAT (3) which can be modified in the “more options” menu.

Click “Create Virtual Machine” and you’re ready to go – granted, downloading the virtual machine will take a while.

Details about the Default Switch

The switch, named “Default Switch” or “Layered_ICS”, allows virtual machines to share the host’s network connection.  Without getting too deep into networking (saving that for a different post), this switch has a few unique attributes compared to other Hyper-V switches:

  1. Virtual machines connected to it will have access to the host’s network whether you’re connected to Wi-Fi, a dock, or Ethernet.
  2. It’s available as soon as you enable Hyper-V – you won’t lose internet setting it up.
  3. You can’t delete it.
  4. It has the same name and device ID (GUID c08cb7b8-9b3c-408e-8e30-5e16a3aeb444) on all Windows 10 hosts, so virtual machines on recent builds can assume the same switch is present on all Windows 10 Hyper-V hosts.

I’m really excited by the work we are doing in this area.  These improvements make Hyper-V a better tool for people running virtual machines on a laptop.  They don’t, however, replace existing Hyper-V tools.  If you need to define specific virtual machine settings, New-VM or the new virtual machine wizard are the right tools.  For people with custom networks or complicated virtual network needs, continue using Virtual Switch Manager.

Also keep in mind that all of this is a work in progress.  There are rough edges for the default switch right now and there aren’t many images in the gallery.  Please give us feedback!  Your feedback helps us.  Let us know what images you would like to see and share issues by commenting on this blog or submitting feedback through Feedback Hub.

Cheers,
Sarah

Categories: Microsoft, Virtualisation

Copying Files into a Hyper-V VM with Vagrant

Tue, 07/18/2017 - 21:50

A couple of weeks ago, I published a blog post with tips and tricks for getting started with Vagrant on Hyper-V. My fifth tip was to “Enable Nifty Hyper-V Features,” where I briefly mentioned stuff like differencing disks and virtualization extensions.

While those are useful, I realized later that I should have added one more feature to my list of examples: the “guest_service_interface” field in “vm_integration_services.” It’s hard to know what that means just from the name, so I usually call it “the thing that lets me copy files into a VM.”

Disclaimer: this is not a replacement for Vagrant’s synced folders. Those are super convenient and should really be your default solution for sharing files. This method is more useful in one-off situations.

Enabling Copy-VMFile

Enabling this functionality requires a simple change to your Vagrantfile. You need to set “guest_service_interface” to true within the “vm_integration_services” configuration hash. Here’s what my Vagrantfile looks like for CentOS 7:

# -*- mode: ruby -*-
# vi: set ft=ruby :
Vagrant.configure("2") do |config|
  config.vm.box = "centos/7"
  config.vm.provider "hyperv"
  config.vm.network "public_network"
  config.vm.synced_folder ".", "/vagrant", disabled: true
  config.vm.provider "hyperv" do |h|
    h.enable_virtualization_extensions = true
    h.differencing_disk = true
    h.vm_integration_services = {
      guest_service_interface: true # <---------- this line enables Copy-VMFile
    }
  end
end

You can check that it’s enabled by running Get-VMIntegrationService in PowerShell on the host machine:

PS C:\vagrant_selfhost\centos> Get-VMIntegrationService -VMName "centos-7-1-1.x86_64"

VMName              Name                    Enabled PrimaryStatusDescription SecondaryStatusDescription
------              ----                    ------- ------------------------ --------------------------
centos-7-1-1.x86_64 Guest Service Interface True    OK
centos-7-1-1.x86_64 Heartbeat               True    OK
centos-7-1-1.x86_64 Key-Value Pair Exchange True    OK                       The protocol version of...
centos-7-1-1.x86_64 Shutdown                True    OK
centos-7-1-1.x86_64 Time Synchronization    True    OK                       The protocol version of...
centos-7-1-1.x86_64 VSS                     True    OK                       The protocol version of...

Note: not all integration services work on all guest operating systems. For example, this functionality will not work on the “Precise” Ubuntu image that’s used in Vagrant’s “Getting Started” guide. The full compatibility list for various Windows and Linux distributions can be found here. Just click on your chosen distribution and check for “File copy from host to guest.”

Using Copy-VMFile

Once you’ve got a VM set up correctly, copying files to and from arbitrary locations is as simple as running Copy-VMFile in PowerShell.

Here’s a sample test I used to verify it was working on my CentOS VM:

Copy-VMFile -Name 'centos-7-1-1.x86_64' -SourcePath '.\Foo.txt' -DestinationPath '/tmp' -FileSource Host

Full details can be found in the official documentation. Unfortunately, you can’t yet use it to copy files from your VM to your host. If you’re running a Windows guest, you can use Copy-Item with PowerShell Direct to make that work; see this document for more details.

How Does It Work?

The way this works is by running Hyper-V integration services within the guest operating system. Full details can be found in the official documentation. The short version is that integration services are Windows Services (on Windows) or Daemons (on Linux) that allow the guest operating system to communicate with the host. In this particular instance, the integration service allows us to copy files to the VM over the VM Bus (no network required!).

Conclusion

Hope you find this helpful — let me know if there’s anything you think I missed.

John Slack
Program Manager
Hyper-V Team

Categories: Microsoft, Virtualisation

Vagrant and Hyper-V — Tips and Tricks

Thu, 07/06/2017 - 22:22
Learning to Use Vagrant on Windows 10

A few months ago, I went to DockerCon as a Microsoft representative. While I was there, I had the chance to ask developers about their favorite tools.

The most common tool mentioned (outside of Docker itself) was Vagrant. This was interesting — I was familiar with Vagrant, but I’d never actually used it. I decided that needed to change. Over the past week or two, I took some time to try it out. I got everything working eventually, but I definitely ran into some issues on the way.

My pain is your gain — here are my tips and tricks for getting started with Vagrant on Windows 10 and Hyper-V.

NOTE: This is a supplement for Vagrant’s “Getting Started” guide, not a replacement.

Tip 0: Install Hyper-V

For those new to Hyper-V, make sure you’ve got Hyper-V running on your machine. Our official docs list the exact steps and requirements.

Tip 1: Set Up Networking Correctly

Vagrant doesn’t know how to set up networking on Hyper-V right now (unlike other providers), so it’s up to you to get things working the way you like them.

There are a few NAT networks already created on Windows 10 (depending on your specific build). Layered_ICS should work (but is under active development), while Layered_NAT doesn’t have DHCP. If you’re a Windows Insider, you can try Layered_ICS. If that doesn’t work, the safest option is to create an external switch via Hyper-V Manager, which is the approach I took. If you go this route, a friendly reminder: the external switch is tied to a specific network adapter, so if you make it for Wi-Fi, it won’t work when you hook up the Ethernet, and vice versa.

Instructions for adding an external switch in Hyper-V manager
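If you prefer PowerShell over the Hyper-V Manager UI, you can create the same external switch from an elevated prompt; the adapter name below is an example, so check Get-NetAdapter for yours:

```powershell
# Find the name of the physical adapter you want to bind
Get-NetAdapter

# Create an external switch bound to that adapter (example name "Ethernet")
New-VMSwitch -Name "External" -NetAdapterName "Ethernet" -AllowManagementOS $true
```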

Tip 2: Use the Hyper-V Provider

Unfortunately, the Getting Started guide uses VirtualBox, and you can’t run other virtualization solutions alongside Hyper-V. You need to change the “provider” Vagrant uses at a few different points.

When you install your first box, add --provider:

vagrant box add hashicorp/precise64 --provider hyperv

And when you boot your first Vagrant environment, again, add --provider. Note: you might run into the error mentioned in Tip 4, so skip ahead if you see something like “mount error(112): Host is down”.

vagrant up --provider hyperv

Tip 3: Add the basics to your Vagrantfile

Adding the provider flag is a pain to do every single time you run vagrant up. Fortunately, you can set up your Vagrantfile to automate things for you. After running vagrant init, modify your vagrant file with the following:

Vagrant.configure(2) do |config|
  config.vm.box = "hashicorp/precise64"
  config.vm.provider "hyperv"
  config.vm.network "public_network"
end

One additional trick here: vagrant init will create a Vagrantfile that appears to be full of commented-out items. However, there is one line that is not commented out: the line setting config.vm.box = "base".


Make sure you delete that line! Otherwise, you’ll end up with an error like this:

Bringing machine 'default' up with 'hyperv' provider...
==> default: Verifying Hyper-V is enabled...
==> default: Box 'base' could not be found. Attempting to find and install...
    default: Box Provider: hyperv
    default: Box Version: >= 0
==> default: Box file was not detected as metadata. Adding it directly...
==> default: Adding box 'base' (v0) for provider: hyperv
    default: Downloading: base
    default: An error occurred while downloading the remote file. The error
    default: message, if any, is reproduced below. Please fix this error and try again.

Tip 4: Shared folders use SMBv1 for hashicorp/precise64

For the image used in the “Getting Started” guide (hashicorp/precise64), Vagrant tries to use SMBv1 for shared folders. However, if you’re like me and have SMBv1 disabled, this will fail:

Failed to mount folders in Linux guest. This is usually because
the "vboxsf" file system is not available. Please verify that
the guest additions are properly installed in the guest and
can work properly. The command attempted was:

mount -t cifs -o uid=1000,gid=1000,sec=ntlm,credentials=/etc/smb_creds_e70609f244a9ad09df0e760d1859e431 //10.124.157.30/e70609f244a9ad09df0e760d1859e431 /vagrant

The error output from the last command was:

mount error(112): Host is down
Refer to the mount.cifs(8) manual page (e.g. man mount.cifs)

You can check if SMBv1 is enabled with this PowerShell Cmdlet:

Get-SmbServerConfiguration
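That cmdlet prints a long list of server settings; the property to look at is EnableSMB1Protocol:

```powershell
# False means SMBv1 is disabled on the host
Get-SmbServerConfiguration | Select-Object EnableSMB1Protocol
```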

If you can live without synced folders, here’s the line to add to your Vagrantfile to disable the default synced folder:

config.vm.synced_folder ".", "/vagrant", disabled: true

If you can’t, you can try installing cifs-utils in the VM and then re-provisioning. You could also try another synced folder method; for example, rsync works with Cygwin or MinGW. Disclaimer: I personally didn’t try either of these methods.

Tip 5: Enable Nifty Hyper-V Features

Hyper-V has some useful features that improve the Vagrant experience. For example, a pretty substantial portion of the time spent running vagrant up is spent cloning the virtual hard drive. A faster way is to use differencing disks with Hyper-V. You can also turn on virtualization extensions, which allow nested virtualization within the VM (e.g. Docker with Hyper-V containers). Here are the lines to add to your Vagrantfile to add these features:

config.vm.provider "hyperv" do |h|
  h.enable_virtualization_extensions = true
  h.differencing_disk = true
end

There are many more customization options that can be added here (e.g. VMName, CPU/memory settings, integration services). You can find the details in the Hyper-V provider documentation.

Tip 6: Filter for Hyper-V compatible boxes on Vagrant Cloud

You can find more boxes to use in the Vagrant Cloud (formerly called Atlas). They let you filter by provider, so it’s easy to find all of the Hyper-V compatible boxes.

Tip 7: Default to the Hyper-V Provider

While adding the default provider to your Vagrantfile is useful, it means you need to remember to do it with each new Vagrantfile you create. If you don’t, Vagrant will try to download VirtualBox when you vagrant up for the first time with your new box. Again, VirtualBox doesn’t work alongside Hyper-V, so this is a problem.

PS C:\vagrant> vagrant up
==> Provider 'virtualbox' not found. We'll automatically install it now...
    The installation process will start below. Human interaction may be
    required at some points. If you're uncomfortable with automatically
    installing this provider, you can safely Ctrl-C this process and install
    it manually.
==> Downloading VirtualBox 5.0.10...
    This may not be the latest version of VirtualBox, but it is a version
    that is known to work well. Over time, we'll update the version that
    is installed.

You can set your default provider on a user level by using the VAGRANT_DEFAULT_PROVIDER environment variable. For more options (and details), see the relevant page of Vagrant’s documentation.

Here’s how I set the user-level environment variable in PowerShell:

[Environment]::SetEnvironmentVariable("VAGRANT_DEFAULT_PROVIDER", "hyperv", "User")

Again, you can also set the default provider in the Vagrantfile (see Tip 3), which will prevent this issue on a per-project basis. You can also just add --provider hyperv when running vagrant up. The choice is yours.

Wrapping Up

Those are my tips and tricks for getting started with Vagrant on Hyper-V. If there are any you think I missed, or anything you think I got wrong, let me know in the comments.

Here’s the complete version of my simple starting Vagrantfile:

# -*- mode: ruby -*-
# vi: set ft=ruby :
# All Vagrant configuration is done below. The "2" in Vagrant.configure
# configures the configuration version (we support older styles for
# backwards compatibility). Please don't change it unless you know what
# you're doing.
Vagrant.configure("2") do |config|
  config.vm.box = "hashicorp/precise64"
  config.vm.provider "hyperv"
  config.vm.network "public_network"
  config.vm.synced_folder ".", "/vagrant", disabled: true
  config.vm.provider "hyperv" do |h|
    h.enable_virtualization_extensions = true
    h.differencing_disk = true
  end
end

Categories: Microsoft, Virtualisation