Helge Klein

Tools for IT Pros

Is Blockchain the Right Technology for My Application?

Mon, 03/19/2018 - 14:23

This is an attempt at guidelines that help with a technology decision: when and where should blockchain technology be used?

Blockchain in a Nutshell

A blockchain is a distributed list to which more than one entity can add new elements. Each list element cryptographically validates its predecessor. Combined with the fact that adding new elements is computationally expensive, this protects the list’s integrity.

chain chain chain by lisa cee under CC
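To make this concrete, here is a toy hash chain in Python — a deliberately simplified sketch, not a real blockchain implementation. Each block commits to its predecessor’s hash, adding a block requires a small proof of work, and verifying the chain means walking every block:

```python
import hashlib
import json

def block_hash(block):
    # Hash the block's contents, including the predecessor's hash, so any
    # change to an earlier block invalidates all of its successors.
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def mine_block(prev_hash, data, difficulty=2):
    # Proof of work: find a nonce so the hash starts with `difficulty` zeros.
    # This is what makes adding elements computationally expensive.
    nonce = 0
    while True:
        block = {"prev": prev_hash, "data": data, "nonce": nonce}
        h = block_hash(block)
        if h.startswith("0" * difficulty):
            block["hash"] = h
            return block
        nonce += 1

def verify_chain(chain):
    # Verification needs access to every block: each stored hash must match
    # the block's contents, and each block must point at its predecessor.
    for i, block in enumerate(chain):
        body = {k: block[k] for k in ("prev", "data", "nonce")}
        if block_hash(body) != block["hash"]:
            return False
        if i > 0 and block["prev"] != chain[i - 1]["hash"]:
            return False
    return True

chain = [mine_block("0" * 64, "genesis")]
chain.append(mine_block(chain[-1]["hash"], "second block"))
assert verify_chain(chain)

chain[0]["data"] = "tampered"   # altering history...
assert not verify_chain(chain)  # ...is detected immediately
```

Note how cheap detection is compared to forgery: a tamperer would have to re-mine the altered block and every block after it.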

Pros and Cons of the Blockchain Architecture

Let’s start with what blockchain is good at:

  • Due to the distributed nature, there is no dependency on any individual entity for the management of the data
  • Tampering with the data is uneconomical because of the huge resources that would be required

And here are the caveats:

  • Maintaining the list is highly inefficient due to the tamper-proof architecture
  • The throughput in transactions per second is very low compared to a database
  • Storage requirements can be high because verification of the list’s integrity requires access to all blocks

When to Use Blockchain Technology

From the above, we can deduce where blockchain is a good technology choice.

Trust and Accountability

Legal contracts allow for pretty efficient management of accountability and – indirectly – trust. Making a system technologically tamper-proof is unnecessary when the legal risks of tampering are already too high for the parties involved.

Takeaway: blockchain technology only makes sense where traditional legal instruments are insufficient or not applicable.

Dependencies on Elements Outside the Blockchain

If you cannot move the entire data set including all dependencies to the blockchain, the mechanisms that protect the blockchain’s integrity are wasted. People will simply cheat elsewhere in the process.

Takeaway: a blockchain must be self-sufficient and not rely on external data.

Processing Speed and Efficiency

Blockchain technology incurs a high computational overhead. Compared to databases, blockchains are inefficient and slow.

Takeaway: blockchains are not replacements for databases.

Summary

Putting the hype aside, there seem to be very few use cases where blockchains would genuinely be the best technology choice. Bitcoin seems like a good fit but is hampered by its slow transaction speed and huge (energy) inefficiency.

The post Is Blockchain the Right Technology for My Application? appeared first on Helge Klein.

Building a Fast and Silent Workstation PC

Tue, 01/16/2018 - 14:15

This article describes how to build a fast workstation PC that is almost completely silent (actually the fastest possible in terms of single-thread performance). It is based on a PC build published by German c’t magazine.

Why Single-Thread Performance is (Nearly) the Only Thing That Matters

There are many ways to evaluate CPU performance, but it basically boils down to a simple question: am I interested in single-thread or multi-thread performance?

The latter obviously yields a higher number, and many benchmarks focus on the aggregate performance of all the cores in the CPU combined. But in a single-user machine, multi-thread performance only rarely matters. The vast majority of software uses but a single thread (and thus a single CPU core) for any significant calculations. In practice that means: if you want things done quickly, you need a CPU with high single-thread performance.

Multi-thread performance, while probably overrated, is not unimportant. There are quite a few applications out there that do use multiple threads at least part of the time to speed things up.

With the above in mind, we can define our ideal CPU: highest possible single-thread performance with good multi-thread performance. As it turns out, that CPU is currently Intel’s i7-8700K. No other x86 CPU matches its single-core speed, and with six cores total it is a more than decent multi-core contender, too.

Component Selection

Once I knew I wanted the i7-8700K I went to look for ways to run it at next to zero decibels. This is where c’t magazine comes in, probably the world’s only truly fantastic computer magazine left. c’t regularly publishes PC builds that focus on low noise emissions and energy efficiency. One of the best aspects of their builds is that they ignore unnecessarily complex techniques like water cooling or insulation, opting for noise reduction at the source instead. They basically put the fans in the right place of the chassis and test the hell out of the components to make sure none of them emit any unwanted noise.

Based on their suggestions I assembled the following components for my workstation PC build:

CPU: Intel i7-8700K
Mainboard: MSI Z370 Gaming Pro Carbon
RAM: 4 x Crucial DDR4 16 GB PC4-19200 non-ECC (64 GB in total)
SSD 1: Samsung SSD 960 Pro M.2
SSD 2: Samsung SSD 850 Pro (data disk from my previous machine)
CPU cooler: Thermalright Macho Rev.B
Power supply: be quiet! Pure Power 10 400 Watt
Case: be quiet! Pure Base 600
GPU: MSI GeForce GTX 1060 Gaming X 6G (6 GB)

A “gaming” mainboard would not have been my natural choice. Neither do I really need a dedicated GPU. Unfortunately, none of the currently available mainboards for Intel’s 8th generation (Coffee Lake) CPUs are equipped with two DisplayPorts. As we will see below, this choice of mainboard and GPU negatively affects power consumption.

Assembling the Components

The actual build is pretty straightforward. However, you should move the front fan (which is basically useless where it is located) to the back of the case’s top, so that the heat is blown out to the top as well as to the back (by the second preinstalled fan). The case’s top plastic cover can easily be shortened with a saw so that it does not cover the new top-blowing fan.

Here is a view from an intermediate stage before the mainboard was put in the case:

BIOS Settings

First of all, update the BIOS to the latest version, then load the defaults. With that in place, configure the following settings:

  • Settings > Advanced > Windows OS Configuration > Windows 8.1/10 WHQL Support: enabled
  • Overclocking > CPU Features > Intel C-State: enabled
  • Overclocking > CPU Features > Package C-State Limit: C8
  • Overclocking > CPU Features > C1E Support: enabled
  • Hardware Monitor > CPU 1: 40°/15%, 60°/30%, 70°/60%, 85°/100%
  • Hardware Monitor > System 1: 30°/4.08V, 60°/5.04V, 70°/7.08V, 85°/12V
  • Hardware Monitor > System 4: 30°/4.08V, 60°/5.04V, 70°/7.08V, 85°/12V
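
The Hardware Monitor entries are points on a fan curve mapping temperature to fan duty cycle (CPU fan) or voltage (case fans); the BIOS interpolates between the configured points. As an illustration only — this assumes plain linear interpolation, not necessarily MSI’s exact behavior:

```python
def fan_setting(temp, curve):
    """Linearly interpolate a fan setting from (temperature, value) points."""
    points = sorted(curve)
    if temp <= points[0][0]:
        return points[0][1]   # below the curve: minimum setting
    if temp >= points[-1][0]:
        return points[-1][1]  # above the curve: maximum setting
    for (t0, v0), (t1, v1) in zip(points, points[1:]):
        if t0 <= temp <= t1:
            return v0 + (v1 - v0) * (temp - t0) / (t1 - t0)

# The CPU 1 curve from the BIOS settings above (temperature in °C -> duty in %)
cpu_curve = [(40, 15), (60, 30), (70, 60), (85, 100)]
print(fan_setting(50, cpu_curve))  # -> 22.5, halfway between 15 % and 30 %
```

The practical effect of such a curve: the fan stays near its quiet minimum at idle temperatures and only ramps up when the CPU actually gets hot.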

Power Consumption

A modern PC’s power consumption is highly volatile, changing many times per second depending on the workload. The most important value, however, is idle power consumption, because no matter how furiously you work, your machine will be idling a large part of the time.

There have been great improvements in idle power consumption in the past years. As an example, c’t published a build for an 11 Watt PC in December 2016. This i7-8700K build, unfortunately, is not as efficient. The best I observed is a little more than 36 Watts. The typical idle power consumption is around 40 Watts.

I blame Intel’s Z370 chipset, the only chipset currently available for Intel’s 8th generation (Coffee Lake) CPUs. According to other people’s measurements, the GPU consumes a bit less than 10 Watts when idle. Interestingly, it does not matter whether you connect one or two (4K) displays to the GPU. There is not even a significant change in power consumption if you connect a single display to the mainboard’s Intel graphics instead.

One caveat to be aware of: in sleep mode, the system consumes 12.6 Watts, about 12 Watts more than it should. This is the case even with the ErP Ready BIOS setting enabled, which is supposed to make the system conform to the EU’s environmental regulations. I worked around this inefficiency by configuring Windows to hibernate when the power button is pressed instead of sleeping. During hibernation the system consumes only 0.2 Watts.

Noise Emissions / Silence

I do not have the equipment to measure noise emissions, but I can say that the machine is nearly completely silent. Only when there is no ambient noise at all can a very unobtrusive mid-frequency ventilator humming be heard. This does not even change during prolonged periods of high load.

CPU Performance

There are many different ways to measure CPU performance. I find Cinebench to be a useful indicator. It has both single-core and multi-core benchmarks. The single-core result of 203 is even slightly higher than expected. The multi-core result of 1414 is only surpassed by some of AMD’s Ryzen CPUs and by Intel’s expensive i9 processors. Cinebench results were measured before any Meltdown/Spectre patches were applied, by the way.

The post Building a Fast and Silent Workstation PC appeared first on Helge Klein.

Hyper-V Backup – Products & Options

Wed, 01/10/2018 - 14:14

It’s 2018, and backing up Hyper-V hosts is solved, right? I certainly expected it to be when I took on the task of finding a simple and reliable way to back up three hosts colocated in a large datacenter. This article summarizes my journey and the findings resulting from it.

Backup Products

It became immediately obvious that Windows Server 2016 does not come with any useful backup functionality for Hyper-V. Looking for commercial products, I found the following three candidates that looked promising:

  • Altaro Backup 7.5
  • Nakivo Backup & Replication 7.3
  • Veeam Backup & Replication 9.5 Update 2

There are other products out there, of course, but looking at their websites none of them seemed enticing enough for me to include them in a trial.

My Requirements

My expectations for a Hyper-V backup product are moderate enough, I believe. I do not want to become a full-time backup administrator. I have precious little time as it is. The products we use for our IT need to “just work”. At least, that is the theory.

Requirement #1: Good UX

First of all, I need a backup product to be easy to install and use. The UI should ideally be self-documenting so it can be used without studying manuals, FAQs and KB articles.

Requirement #2: Reliability

Secondly, backup software needs to do its job reliably, every single day. Backups are fail-safes; you cannot have the fail-safes fail.

Requirement #3: Efficiency

This is more of a nice-to-have than a hard requirement, but it would sure be nice to have a product that does its job swiftly while efficiently deduplicating the backup storage. Also, a light agent and a small installation footprint are always a plus.

Where to Back Up To?

Backup Target Option #1: SMB Share

I intended to store the backups on an SMB file share which the company that is hosting the servers is offering at low cost. I had used that type of SMB share for manual backup jobs for several years.

Backup Target Option #2: Dedicated Backup Server

Some products offer to host the backups on a dedicated backup server. That was never an option for me. Our environment is relatively small, and standing up and managing another physical server just to host backups was out of the question. I was looking for a simple, elegant solution, not additional complexity.

Backup Target Option #3: Cloud Storage

Once you think about it, this seems to be the obvious choice, at least for Hyper-V hosts located in a datacenter with high-speed internet connectivity (like ours). Storing (encrypted) backups in Amazon S3 or Azure Storage solves multiple problems at once: the backups are located in a physically different site, vastly improving disaster recoverability while at the same time reducing the dependency on one hosting provider. With backups stored in the cloud moving VMs between datacenters becomes so much easier.

The reason why I did not choose this option is as sad as it is simple: none of the products support cloud storage as primary backup targets. This may (and probably will) change in future versions, but today not even Microsoft’s Azure Backup Server has the capability to send backups directly to Azure. As the docs state:

In the current architecture of Azure Backup Server, the Azure Backup vault holds the second copy of the data while the local storage holds the first (and mandatory) backup copy.

Notes From My Product Evaluations

Following are the notes from my trial installations of the three products mentioned earlier, two of which I used for several weeks each before writing this.

Veeam Backup & Replication 9.5 Update 2 First Attempt

This was the first product I tried, simply because Veeam is the largest and most well-known brand out there. During my initial installation attempt, I was a little shocked about the huge 2 GB download. When I was finally presented with the product’s console, I could not find any way to add an SMB share as a backup target. As it turned out in my second attempt several weeks later, you have to locate a tiny little chevron-style button placed near the bottom of the screen:

Once you manage to find and click that, a dialog pops up that lets you enable the “backup infrastructure” tab. Why that is not enabled by default is beyond me. Veeam could probably double their sales by that simple change.

As it stands I failed to locate the magic button during my first attempt, so I uninstalled the product, which is a tedious procedure because Veeam litters the Programs and Features window with a good half dozen entries.

Second Attempt and Verdict

Only after I had tried Altaro and Nakivo and failed did I look at Veeam again. During this second run, I located and skimmed a manual which, if printed out, would dwarf any city’s phone book. My newfound knowledge enabled me to find the magic button and configure the product optimally for our needs.

Veeam Backup & Replication is certainly powerful, but unnecessarily complex. Related settings are spread across different wizards, dialogs and other parts of the UI. However, once configured, it seems to be a reliable workhorse.

Cloud storage as the primary backup target is not supported as of version 9.5. I found a note on a blog that support for object-based storage is to come with version 10, so, hopefully, that will give us direct cloud backup.

My Recommended Veeam Configuration for Hyper-V

For optimal backup consistency with a diverse set of Windows and Linux VMs, I recommend configuring Veeam so that application-consistency is attempted first, crash-consistency second.

Application-consistent backups require Veeam to trigger VSS in the VM, so admin rights in the guest are mandatory for this to work. Sometimes VSS gets confused and fails, or you do not have admin credentials for all VMs. That is when you need the crash-consistent variant as your second option, which basically backs up a VM as if it had suffered a sudden power loss. Unfortunately, this two-phased approach is not enabled by default. To configure it, you have to dig pretty deep:

Hyper-V crash-consistency needs to be specifically enabled in a different dialog. While you are there make sure to switch on changed block tracking (CBT):
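
The resulting two-phase behavior amounts to a simple fallback pattern. A sketch in Python with the snapshot operations stubbed out — the names here are illustrative, not Veeam’s actual API:

```python
class VssError(Exception):
    """Raised when the in-guest VSS snapshot fails (e.g. VSS is confused
    or no admin credentials are available for the VM)."""

def backup_vm(vm, app_consistent_snapshot, crash_consistent_snapshot):
    # First choice: application-consistent. VSS quiesces applications inside
    # the guest, which requires admin rights in the VM.
    try:
        return app_consistent_snapshot(vm)
    except VssError:
        # Fallback: crash-consistent. The VM is captured as if it had
        # suffered a sudden power loss -- no in-guest cooperation needed.
        return crash_consistent_snapshot(vm)
```

The point of the fallback is that a backup always gets taken: a crash-consistent copy is still far better than a failed job.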

Altaro Backup 7.5

At 273 MB, Altaro is a nice, small download. Installation is similarly quick. The UI is so intuitive that a product manual is not required. By default, Altaro employs the sensible backup strategy of trying application-consistent first, crash-consistent second.

All in all, I was really happy with Altaro until it started failing every single time with the error The backup location contains too many bad data blocks. Error code DEDUP_064:

This happened with two different SMB shares, in the first case after two to three weeks, in the second case after only a few days. I suspect it is caused by an incompatibility with the Linux-based SMB shares our provider offers. Altaro’s support was not able to find a solution. It would be nice if they offered a tool or script to test a backup target for compatibility (which they do not). In any case, Veeam seems to be able to work with the SMB shares just fine.

With regards to cloud backup, Altaro can send secondary copies to Azure, but that capability is reserved for the most expensive Unlimited Plus edition.

Nakivo Backup & Replication 7.3

With just 192 MB, Nakivo is an even smaller download than Altaro. That, unfortunately, is the most positive thing I have to say about it.

The installer leaves critical tasks to the admin that do not even seem to be documented:

  • WS-Man needs to be enabled over the network and firewall rules need to be added
  • The installer creates a self-signed certificate that browsers reject

Once you are past the mandatory manual WS-Man configuration and try to access the product’s console in IE (IE being preinstalled on Windows Server) you are greeted with a recommendation not to use IE.

When I tried to add the SMB backup target for which I had a username, a password, and a UNC path, specifying the username turned out to be more of a challenge than I had anticipated:

  • Just the username did not work
  • “.\username” did not work either
  • “user\user” finally did the trick

Again, that does not seem to be documented. Altaro and Veeam did not have any issues authenticating to the share.

Connecting to a second share did not work at all, not even with the “user\user” hack. I verified the credentials were OK by mapping a drive on the command line, which was no problem, of course.

As for the UI, I found Nakivo’s console to be slow and badly designed.

Previous versions of Nakivo required SMB1, which the current 7.3 finally got rid of. Now SMB2 is used instead.

Conclusion

Altaro is a nice product, easy to use, yet very capable. Give it a try if you plan to store your backups on a Windows machine. Veeam is the workhorse whose power comes with quite a bit of complexity. Nakivo cannot be recommended at this point.

Direct backup to the cloud is a feature I would like to see in the future. Today, cloud storage can only be used as a secondary backup target.

The post Hyper-V Backup – Products & Options appeared first on Helge Klein.