Community


Hyper-V Backup – Products & Options

Helge Klein - Wed, 01/10/2018 - 14:14

It’s 2018, and backing up Hyper-V hosts is solved, right? I certainly expected it to be when I took on the task of finding a simple and reliable way to back up three hosts colocated in a large datacenter. This article summarizes my journey and the findings resulting from it.

Backup Products

It became immediately obvious that Windows Server 2016 does not come with any useful backup functionality for Hyper-V. Looking at commercial products, the following three candidates seemed promising:

  • Altaro Backup 7.5
  • Nakivo Backup & Replication 7.3
  • Veeam Backup & Replication 9.5 Update 2

There are other products out there, of course, but looking at their websites none of them seemed enticing enough for me to include them in a trial.

My Requirements

My expectations for a Hyper-V backup product are moderate enough, I believe. I do not want to become a full-time backup administrator. I have precious little time as it is. The products we use for our IT need to “just work”. At least, that is the theory.

Requirement #1: Good UX

First of all, I need a backup product to be easy to install and use. The UI should ideally be self-documenting so it can be used without studying manuals, FAQs and KB articles.

Requirement #2: Reliability

Secondly, a backup product needs to do its job reliably, every single day. Backups are fail-safes; you cannot have the fail-safes themselves fail.

Requirement #3: Efficiency

This is more of a nice-to-have than a hard requirement, but it would sure be nice to have a product that does its job swiftly while efficiently deduplicating the backup storage. Also, a light agent and a small installation footprint are always a plus.

Where to Back Up To? Backup Target Option #1: SMB Share

I intended to store the backups on an SMB file share which the company that is hosting the servers is offering at low cost. I had used that type of SMB share for manual backup jobs for several years.

Backup Target Option #2: Dedicated Backup Server

Some products offer to host the backups on a dedicated backup server. That was never an option for me. Our environment is relatively small, and standing up and managing another physical server just to host backups was out of the question. I was looking for a simple, elegant solution, not additional complexity.

Backup Target Option #3: Cloud Storage

Once you think about it, this seems to be the obvious choice, at least for Hyper-V hosts located in a datacenter with high-speed internet connectivity (like ours). Storing (encrypted) backups in Amazon S3 or Azure Storage solves multiple problems at once: the backups are located in a physically different site, vastly improving disaster recoverability while at the same time reducing the dependency on a single hosting provider. With backups stored in the cloud, moving VMs between datacenters becomes much easier.

The reason why I did not choose this option is as sad as it is simple: none of the products supports cloud storage as a primary backup target. This may (and probably will) change in future versions, but today not even Microsoft’s Azure Backup Server can send backups directly to Azure. As the docs state:

In the current architecture of Azure Backup Server, the Azure Backup vault holds the second copy of the data while the local storage holds the first (and mandatory) backup copy.

Notes From My Product Evaluations

Following are the notes from my trial installations of the three products mentioned earlier, two of which I used for several weeks each before writing this.

Veeam Backup & Replication 9.5 Update 2 First Attempt

This was the first product I tried, simply because Veeam is the largest and most well-known brand out there. During my initial installation attempt, I was a little shocked by the huge 2 GB download. When I was finally presented with the product’s console, I could not find any way to add an SMB share as a backup target. As it turned out in my second attempt several weeks later, you have to locate a tiny chevron-style button placed near the bottom of the screen.

Once you manage to find and click that, a dialog pops up that lets you enable the “backup infrastructure” tab. Why that is not enabled by default is beyond me. Veeam could probably double their sales by that simple change.

As it stands I failed to locate the magic button during my first attempt, so I uninstalled the product, which is a tedious procedure because Veeam litters the Programs and Features window with a good half dozen entries.

Second Attempt and Verdict

Only after I had tried Altaro and Nakivo and failed did I look at Veeam again. During this second run, I located and skimmed a manual which, if printed out, would dwarf any city’s phone book. My newfound knowledge enabled me to find the magic button and configure the product optimally for our needs.

Veeam Backup & Replication is certainly powerful, but unnecessarily complex. Related settings are spread across different wizards, dialogs and other parts of the UI. However, once configured, it seems to be a reliable workhorse.

Cloud storage as the primary backup target is not supported as of version 9.5. I found a note on a blog that support for object-based storage is to come with version 10, so, hopefully, that will give us direct cloud backup.

My Recommended Veeam Configuration for Hyper-V

For optimal backup consistency with a diverse set of Windows and Linux VMs, I recommend configuring Veeam so that application-consistency is attempted first, crash-consistency second.

Application-consistent backups require Veeam to trigger VSS in the VM, so admin rights in the guest are mandatory for this to work. Sometimes VSS gets confused and fails, or you do not have admin credentials for all VMs. That is when you need the crash-consistent variant as your second option, which basically backs up a VM as if it had suffered a sudden power loss. Unfortunately, this two-phased approach is not enabled by default. To configure it, you have to dig pretty deep.
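As a side note (not from the original article): when application-consistent backups keep failing, it often pays to check the health of the VSS writers inside the affected guest before blaming the backup product:

```shell
REM Run inside the Windows guest, from an elevated command prompt.
REM Lists all VSS writers with their state; look for writers in a "Failed" state,
REM which typically explains failed application-consistent snapshots.
vssadmin list writers
```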

Hyper-V crash-consistency needs to be specifically enabled in a different dialog. While you are there, make sure to switch on changed block tracking (CBT).

Altaro Backup 7.5

At 273 MB, Altaro is a nice small download. Installation is similarly quick. The UI is so intuitive that a product manual is not required. By default, Altaro employs the sensible backup strategy of trying application-consistent backups first, crash-consistent second.

All in all, I was really happy with Altaro until it started failing every single time with the error “The backup location contains too many bad data blocks. Error code DEDUP_064”.

This happened with two different SMB shares, in the first case after two to three weeks, in the second case after only a few days. I suspect it is caused by an incompatibility with the Linux-based SMB shares our provider offers. Altaro’s support was not able to find a solution. It would be nice if they offered a tool or script to test a backup target for compatibility (which they do not). In any case, Veeam seems to be able to work with the SMB shares just fine.

As for cloud backup, Altaro can send secondary copies to Azure, but that capability is reserved for the most expensive Unlimited Plus edition.

Nakivo Backup & Replication 7.3

At just 192 MB, Nakivo is an even smaller download than Altaro. That, unfortunately, is the most positive thing I have to say about it.

The installer leaves critical tasks to the admin that do not even seem to be documented:

  • WS-Man needs to be enabled over the network and firewall rules need to be added
  • The installer creates a self-signed certificate that browsers reject

Once you are past the mandatory manual WS-Man configuration and try to access the product’s console in IE (IE being preinstalled on Windows Server) you are greeted with a recommendation not to use IE.

When I tried to add the SMB backup target for which I had a username, a password, and a UNC path, specifying the username turned out to be more of a challenge than I had anticipated:

  • Just the username did not work
  • “.\username” did not work either
  • “user\user” finally did the trick

Again, that does not seem to be documented. Altaro and Veeam did not have any issues authenticating to the share.

Connecting to a second share did not work at all, not even with the “user\user” hack. I verified the credentials were OK by mapping a drive on the command line, which was no problem, of course.
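For reference, this is what that command-line credential check looks like (the server, share, and account names below are placeholders, not the actual environment):

```shell
REM Map the share with explicit credentials; a failure here means the
REM credentials or the UNC path are wrong, not the backup product.
net use Z: \\backupserver\backups /user:user MyPassword
REM Clean up the mapping afterwards
net use Z: /delete
```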

As for the UI, I found Nakivo’s console to be slow and badly designed.

Previous versions of Nakivo required SMB1, which the current 7.3 finally got rid of. Now SMB2 is used instead.

Conclusion

Altaro is a nice product, easy to use, yet very capable. Give it a try if you plan to store your backups on a Windows machine. Veeam is the workhorse whose power comes with quite a bit of complexity. Nakivo cannot be recommended at this point.

Direct backup to the cloud is a feature I would like to see in the future. Today, cloud storage can only be used as a secondary backup target.

The post Hyper-V Backup – Products & Options appeared first on Helge Klein.

Three Simple 2018 Information Security Resolutions

Theresa Miller - Tue, 01/09/2018 - 06:30

New Year’s resolutions. Some people love them, and some people hate them. Some stick to them throughout the year, and others have already ignored them at this point in January. That being said, I have put together three simple 2018 information security resolutions almost everyone can benefit from. There’s also a plethora of blog posts […]

The post Three Simple 2018 Information Security Resolutions appeared first on 24x7ITConnection.

Get latest Citrix Receiver version with PowerShell

Aaron Parker's stealthpuppy - Tue, 01/09/2018 - 01:57

I’ve previously written about deploying Citrix Receiver to Windows 10 via Intune with PowerShell. I wrote a script that will detect an installed version of Receiver and update to the latest version if it is out of date. To start with, I’ve hard-coded the current Receiver for Windows version into the script; however, that’s not necessarily the best approach, because it will require updating whenever Receiver is updated.

The Citrix Receiver download page provides a source for querying Receiver versions for all platforms, so if we parse that page, we have a source for the latest Receiver versions for all platforms.

I’ve written a script that will parse the downloads page and return the current Receiver version for each platform unless a login for that platform is required. If you’re looking to find the Receiver version for Windows, Windows LTSR, Linux, Mac etc., the script can be used to return the version number for the desired platform.

Here’s the script:
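The embedded script itself did not survive syndication. Purely as an illustrative sketch of the general approach (the function name, regex, and URL below are my assumptions, not the actual script), parsing page text for a version number might look like this:

```powershell
# Illustrative sketch only -- not the original script.
# Extracts the first version-like string (e.g. "Version: 4.10.1") from page text.
Function Get-VersionFromText {
    Param (
        [Parameter(Mandatory)][string]$Text,
        [string]$Pattern = 'Version:?\s*((?:\d+\.)+\d+)'
    )
    # Return the first capture group of the first match, if any
    If ($Text -match $Pattern) { $Matches[1] }
}

# Typical usage against a live downloads page (URL is a placeholder):
# $page = (Invoke-WebRequest -Uri "https://www.citrix.com/downloads/citrix-receiver/" -UseBasicParsing).Content
# Get-VersionFromText -Text $page
```

The real script additionally maps each platform to its section of the page; the sketch only shows the extraction step.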

To use the script, save as Get-CitrixReceiverVersion.ps1 and run from the command line. With no parameters, it will return the version number for Citrix Receiver for Windows:

.\Get-CitrixReceiverVersion.ps1

The script returns specific platforms with the -Platform parameter. It accepts only valid values, such as ‘Windows’, ‘Mac’ and ‘Linux’; the script validates the input and supports tab completion.

.\Get-CitrixReceiverVersion.ps1 -Platform Mac

Here’s the script in action:

Get-CitrixReceiverVersion.ps1 returning the latest version for various Receiver platforms

I’ve written this primarily for my purposes, but perhaps there are other purposes that I’ve not yet considered. Feedback, issues and improvements to the script are welcome.

This article by Aaron Parker, Get latest Citrix Receiver version with PowerShell appeared first on Aaron Parker.

Categories: Community, Virtualisation

UK Citrix User Group 2018 – Birmingham meeting

Citrix UK User Group - Fri, 01/05/2018 - 16:00

Our 24th XL event will take place in Birmingham on 15th February 2018. Venue: IET Birmingham Austin Court, 80 Cambridge St, Birmingham B1 2NP. Agenda (hover over session title for more details): 09:00 Registration opens, coffee and pastries; 09:30 News and updates – …

Read more »

The post UK Citrix User Group 2018 – Birmingham meeting appeared first on UK Citrix User Group.

The Top Technology Predictions for 2018

Theresa Miller - Thu, 01/04/2018 - 06:30

2017 was a great year in IT, bringing much change in how businesses consume IT. For example, for many organizations the cloud is no longer a question of if but when. We are also now looking at data analytics, virtual reality, and artificial intelligence in new ways that benefit the business and not just in ways […]

The post The Top Technology Predictions for 2018 appeared first on 24x7ITConnection.

DHCP – Activate Filter “Allow” & import MAC address from SCCM by WMI request

Archy.net - Tue, 08/29/2017 - 12:08

Hello folks,
Recently, I posted a script to query SCCM for MAC address information. In this post, I show you how to activate the DHCP “Allow” filter to protect your DHCP lease delivery and deny unknown devices access to your network (I know there is NAP or NAC, but this is a simple way to block the issuance of a DHCP lease).

Prerequisites

First, you need to create an Active Directory user and grant this account “DHCP Administrator” rights.

In the SCCM console, add this user to the “Read-only Analyst” group.

Activate filter “Allow” on DHCP server

Connect to your DHCP server and open the management console. On the IPv4 node, open the drop-down menu, select the “Filters” option, then right-click the “Allow” folder and select “Enable”.

From then on, the DHCP server no longer delivers any leases (the allow list is still empty).

On the DHCP server, run this script to retrieve the MAC address information from the SCCM server and add it to the “Allow” filter list.

Source code

# Connection information
$SiteName = "FR1"
$ServerSite = "sccm"
# The DHCP server to manage; this script is meant to run on the DHCP server itself
# (the original script used $DHCPServer without defining it)
$DHCPServer = $env:COMPUTERNAME

# WMI request
$ImportSCCM = Get-WmiObject -Class SMS_R_SYSTEM -Namespace "root\sms\site_$SiteName" -ComputerName $ServerSite

# Build a collection of name / MAC address / OS entries
$Mycoll = @()
foreach ($obj in $ImportSCCM) {
    Write-Host $obj.NetbiosName $obj.MACAddresses $obj.OperatingSystemNameandVersion
    $Mydetails = "" | Select-Object PCName, MacAddress, OS
    If ($([String]$obj.MACAddresses) -eq "") {
        $Mydetails.PCName = $obj.NetbiosName
        $Mydetails.MacAddress = "Nul"
        $Mydetails.OS = $obj.OperatingSystemNameandVersion
    }
    Else {
        $Mydetails.PCName = $obj.NetbiosName
        # DHCP filters expect dashes, not colons, in MAC addresses
        $Mydetails.MacAddress = [String]$obj.MACAddresses -replace ":","-"
        $Mydetails.OS = $obj.OperatingSystemNameandVersion
    }
    $Mycoll += $Mydetails
}

# Add the MAC addresses to the DHCP "Allow" filter
foreach ($objects in $Mycoll) {
    Add-DhcpServerv4Filter -List Allow -MacAddress $objects.MacAddress -Description $objects.PCName -Confirm:$false -Force -Verbose
}

# Remove obsolete entries (present in the filter but no longer in SCCM)
Compare-Object $(($Mycoll | Select-Object MacAddress).MacAddress) $((Get-DhcpServerv4Filter -ComputerName $DHCPServer -List Allow | Select-Object MacAddress).MacAddress) -IncludeEqual | % {
    if ($_.SideIndicator -eq "=>") {
        Remove-DhcpServerv4Filter -ComputerName $DHCPServer -MacAddress $_.InputObject -Confirm:$false -Verbose
    }
}

When the script is finished, you can see in the DHCP server’s management console that the entries have been added to the “Allow” list.

The DHCP server now correctly delivers leases to devices whose MAC addresses are allowed.

 

Categories: Community, Virtualisation

SCCM – Find Devices MAC Address

Archy.net - Fri, 08/25/2017 - 12:56

Hello Folks,

This week I needed to export the device names and MAC addresses from SCCM to a CSV file.

I need this file to create a green list on the DHCP server. The green list grants permission to obtain a lease from the DHCP server. I will cover this subject in a future post.

To find device information in SCCM, we can work with SCCM’s WMI classes. This script is based on a WMI query.

Source code

$SiteName = "FR1"
$ServerSite = "sccm"
$Mycoll = @()
foreach ($obj in (Get-WmiObject -Class SMS_R_SYSTEM -Namespace "root\sms\site_$SiteName" -ComputerName $ServerSite)) {
    Write-Host $obj.NetbiosName $obj.MACAddresses $obj.OperatingSystemNameandVersion
    $Mydetails = "" | Select-Object PCName, MacAddress, OS
    If ($([String]$obj.MACAddresses) -eq "") {
        $Mydetails.PCName = $obj.NetbiosName
        $Mydetails.MacAddress = "Nul"
        $Mydetails.OS = $obj.OperatingSystemNameandVersion
    }
    Else {
        $Mydetails.PCName = $obj.NetbiosName
        $Mydetails.MacAddress = [String]$obj.MACAddresses
        $Mydetails.OS = $obj.OperatingSystemNameandVersion
    }
    $Mycoll += $Mydetails
}
# Display the results; pipe to Export-Csv instead to produce the CSV file
$Mycoll | Out-GridView

 

 

Categories: Community, Virtualisation

FCUGC – 3rd edition

Archy.net - Sat, 04/29/2017 - 07:54

FCUGC – an encounter of the third kind!

For its third meetup, the French Citrix User Group Community (FCUGC) drops anchor at 47 Quai de la Tournelle, 75005 Paris – Péniche Henjo – for an evening of discussion on Citrix-related topics with this event’s partner, NUTANIX.

The evening will consist of 3 presentations followed by a discussion around 2 or 3 topics, as has become our habit.

The community is an excellent way to get started with presenting: the audience is open-minded and all exchanges are constructive. If you want to take the plunge, now is the time! So if you would like to present at this event, get in touch with Samuel Legrand or myself (contact form, email, Twitter, phone – you will find a way to reach us)!

See you soon!

To register, click on the image:

Thanks again to the sponsors of the two previous editions:

Control UP and Activlan

Categories: Community, Virtualisation

Active Directory Certificate Services [Part1]

Archy.net - Mon, 04/17/2017 - 09:45

In this post, I will give you the information needed to prepare for the installation of a future two-tier PKI infrastructure.

What are AD CS services?

Active Directory Certificate Services (AD CS) provide customizable services for issuing and managing certificates that are used in software security systems that use public key technologies.

Features of AD CS services
  • Certification authorities: Root and subordinate certification authorities are used to issue certificates to users, computers, and services, and to manage certificate validity.
  • Certification authority Web enrollment: Web enrollment allows users to connect to a certification authority with a Web browser to request certificates and retrieve certificate revocation lists.
  • Online Responder: The Online Responder service accepts revocation status requests for specific certificates, evaluates the status of these certificates, and returns a signed response containing the requested certificate status information.

The applications supported by AD CS include S/MIME (Secure/Multipurpose Internet Mail Extensions), secure wireless networks, virtual private networks (VPNs), Internet Protocol Security (IPsec), the Encrypting File System (EFS), smart card logon, SSL/TLS (Secure Sockets Layer/Transport Layer Security), and digital signatures.

Availability of the PKI infrastructure

The PKI infrastructure is separated into different components, each with its own service-level agreement (SLA).

  • Enrolment: This feature is considered non-critical within the infrastructure in terms of the use of certificates. A failed enrolment can always be retried later.
  • Revocation: This feature is critical. A compromised certificate must be revoked as quickly as possible. The SLA for this feature also depends on the status-check feature, because the revocation information is provided by the CRL (or by an OCSP responder, if available).
  • Status check: Checking the status of a certificate depends on the availability of a valid CRL. This means that, at a minimum, a CRL file must be available and valid. For this, the CRL will be generated each day with a validity period of 7 days.
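On a Windows CA, these publication settings can be configured with certutil. A sketch matching the SLA above (the 6-day overlap is my assumption to reach roughly 7 days of total CRL validity; it is not stated in the original):

```shell
:: Publish a new base CRL every day
certutil -setreg CA\CRLPeriodUnits 1
certutil -setreg CA\CRLPeriod "Days"
:: Add an overlap window so each CRL stays valid for about 7 days in total (assumption)
certutil -setreg CA\CRLOverlapUnits 6
certutil -setreg CA\CRLOverlapPeriod "Days"
:: Restart the CA service to apply the changes
net stop certsvc
net start certsvc
```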
What do you need before starting the installation?

Before starting the installation, you need to gather some information to define your CA environment. These settings are needed to correctly prepare your documents and your installation.

Define the Root CA attributes

First, correctly define the different attributes of the Root CA:

Define the CRL and AIA paths

Define the Enterprise Subordinate CA attributes

Define the CRL and AIA paths

How to define the certificates’ DN

You can use this table to correctly define your certificates’ DN:

Firewall rules

This table defines the rules to activate on your company’s firewall.

To avoid opening all dynamic RPC ports, you can set the certificate authority’s DCOM port.

Fixed DCOM port:

If you want to set the CA server to use a static DCOM port, follow these steps:

  • Connect to the CA server with an account that has local Administrator privileges
  • Open the “Component Services” MMC (DCOMCNFG)
  • In the left panel, expand Component Services, Computers, My Computer, and click DCOM Config
  • In the right pane, right-click “CertSrv Request” and select “Properties”
  • On the “Endpoints” tab, click the “Add” button
  • Select “Use static endpoint”, enter the port you want (for example “49152”), and click OK twice
  • Restart the Certificate Authority service:
    • net stop certsvc
    • net start certsvc
  • To check the listening port, run the command “netstat -anob” and look for the port linked to the “certsrv” process

 

Now that we have defined all of the prerequisites, you still have to design your two-tier PKI infrastructure. The Root CA is not resource-intensive and can be turned off at the end of the installation. It is standalone and, in practice, should not even have an IP address.
The point of attention is the intermediate CA. Based on the number of users and devices, you must estimate the size of the CRL file, and decide whether you want to install all of your components on the same server or separate the roles, for example by installing one or more IIS servers apart from the PKI.
I did not mention an element that simplifies querying the CRL: the OCSP service. If you want to integrate this feature, you will need to define the service access URL.
We are now ready to move on to the installation of our two-tier PKI infrastructure. I will detail the installation of the components in a future post.


Categories: Community, Virtualisation


Subscribe to Spellings.net aggregator - Community