
PowerShell : Hunt CheckBox of Doom

I had posted a while back about the dreaded Checkbox of Doom, which is a real pain in Migration Scenarios where a few AD Users might be marked as protected (AdminCount = 1) but we don't really know which (protected) Group membership might be causing this.

Shout out to MVP Ace Fekay for providing his insights on the topic :)


It becomes a pain when the Groups are nested multiple levels deep and we need to determine which Protected Group membership the User has that might be causing the Inheritance to be disabled (checkbox of doom).


[Update] Fellow friend and MVP Guido Oliveira highlighted an issue he had come across: AdminCount was set to 1 while a User was part of a Protected Group, but after the User was removed from the Group (as per the Wiki link shared at the end) the AdminCount stayed at 1 and Inheritance remained disabled. This Function can hunt those accounts too.

Function is up for download @Technet : Click Here

Read below on how to use the Script and the Scenario it tackles.

Scenario

I have 2 groups named NestedGroup1 & NestedGroup2 which are nested under the Server Operators (Protected) Group as shown below; a User each, xyzabc & test123, is added to them respectively:





Now after explaining the nested scenario, I am going to explicitly remove the Inheritance from one of the Users, named Abdul.Yanwube, see below:



Now I did this to show the 2 types of accounts which can have Inheritance disabled:

  1. Protected Accounts : AD Users which are part of a Protected Group (possibly nested).
  2. Non-Admin Users : AD Users which might have Inheritance disabled because of manual error, or because something broke during a Migration and disabled inheritance.
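If you just want a quick inventory of the accounts the Function will flag, below is a minimal sketch (assuming the ActiveDirectory module is loaded) that lists every user currently marked as protected:

# List users whose adminCount attribute is set to 1 (marked as protected)
Get-ADUser -LDAPFilter '(adminCount=1)' -Properties adminCount |
    Select-Object -Property SamAccountName, adminCount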


Running the Function :

Note - The Function leverages the ActiveDirectory PowerShell Module (prerequisite).

Dot source the PS1 file (downloaded from Technet):

. C:\Users\Dexter\Downloads\Get-ADUserWithInheritanceDisabled.ps1 #Note the dot at the beginning

Once done, read the help for the function by issuing the below:
help Get-ADUserWithInheritanceDisabled -Full 

The Function uses the AD PowerShell Module to fetch the AD Users with the needed attributes and then processes them. The Function has 3 parameter sets, based on how the Get-ADUser cmdlet from the AD PS Module is invoked to fetch the Users.


  1. Specifying SamAccountName(s)
  2. Using -Filter with SearchBase and SearchScope
  3. Using -LDAPFilter with SearchBase and SearchScope

Specifying SamAccountName(s)


If you have a list of SamAccountNames dumped in a CSV/text file or any data source and you know how to fetch it using PowerShell, then you can pipe the string array of SamAccountNames to the Function and it will process them.

For example: a test.txt file has the SamAccountNames - dexterposh, test123, xyz1abc & abdul.yanwube in it. We use Get-Content to get the content and pipe it to the Function like below:
PS>Get-Content C:\temp\test.txt | Get-ADUserWithInheritanceDisabled


SamAccountname : DexterPOSH
UserPrincipalname : DexterPOSH@dex.com
IsAdmin : True
InheritanceDisabled : True
ProtectedGroup1 : CN=Schema Admins,CN=Users,DC=dex,DC=com
ProtectedGroup2 : CN=Administrators,CN=Builtin,DC=dex,DC=com
ProtectedGroup3 : CN=Enterprise Admins,CN=Users,DC=dex,DC=com
ProtectedGroup4 : CN=Domain Admins,CN=Users,DC=dex,DC=com

SamAccountname : test123
UserPrincipalname : test123@dex.com
IsAdmin : True
InheritanceDisabled : True
ProtectedGroup1 : CN=NestedGroup2,CN=Users,DC=dex,DC=com

WARNING: [PROCESS] : SamAccountName : Cannot find an object with identity: 'xyz1abc' under: 'DC=dex,DC=com'.
SamAccountname : Abdul.Yanwube
UserPrincipalname : Abdul.Yanwube@dex.com
IsAdmin : False
InheritanceDisabled : True

Note - The Function throws a warning if it is not able to locate a User with the account name.

Take a moment to look at the output and note that it reports exactly the scenario we discussed earlier.

Using -Filter with SearchBase and SearchScope

Now there might be times when you want to search a particular OU in AD recursively for Users with Inheritance disabled. As the function uses Get-ADUser to retrieve the User details, the -Filter (mandatory), -SearchBase and -SearchScope parameters work the same as they do with the Get-ADUser cmdlet.
Note - See below how using the Base & OneLevel arguments for the -SearchScope parameter changes the result.
PS>Get-ADUserWithInheritanceDisabled -Filter * -SearchBase 'OU=ExchangeUsers,DC=Dex,DC=Com' -SearchScope Base

PS>Get-ADUserWithInheritanceDisabled -Filter * -SearchBase 'OU=ExchangeUsers,DC=Dex,DC=Com' -SearchScope OneLevel


SamAccountname : DexterPOSH
UserPrincipalname : DexterPOSH@dex.com
IsAdmin : True
InheritanceDisabled : True
ProtectedGroup1 : CN=Schema Admins,CN=Users,DC=dex,DC=com
ProtectedGroup2 : CN=Administrators,CN=Builtin,DC=dex,DC=com
ProtectedGroup3 : CN=Enterprise Admins,CN=Users,DC=dex,DC=com
ProtectedGroup4 : CN=Domain Admins,CN=Users,DC=dex,DC=com


Using -LDAPFilter with SearchBase and SearchScope

If you are more comfortable using LDAP filters, then the Function lets you use them to search for Users matching the criteria and process them.
PS>Get-ADUserWithInheritanceDisabled -LDAPFilter '(&(objectCategory=person)(objectClass=user)(name=test*))' -SearchBase 'OU=ExchangeUsers,Dc=Dex,Dc=Com'


SamAccountname : test123
UserPrincipalname : test123@dex.com
IsAdmin : True
InheritanceDisabled : True
ProtectedGroup1 : CN=NestedGroup2,CN=Users,DC=dex,DC=com
 
The Function emits custom objects which carry the relevant information for the Users and, as discussed in the Scenario, it is able to detect the corresponding Protected Groups for a User with Inheritance disabled (if any) and report them.
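If you ever want to check a single account by hand, below is a quick sketch (again assuming the ActiveDirectory module) that reads the inheritance flag straight off the security descriptor; True means inheritance is disabled on the user object:

$User = Get-ADUser -Identity 'test123' -Properties ntSecurityDescriptor
$User.ntSecurityDescriptor.AreAccessRulesProtected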


Below is a gist showing this in action:



If you have any suggestions on how to improve the Script then please leave a comment or contact me :)

Resources : 

Technet Wiki Article on : AdminSDHolder, Protected Groups and Security Descriptor Propagator


PowerShell + Azure : Validate ResourceGroup Tags


Recently I have been working on some DevOps stuff in Azure using Python & PowerShell, so I will be doing a few posts revolving around that.

Why have I added the pic below?
Python is what I have been picking up from the Dev world (currently) and PowerShell is what I have picked from the Ops world.


In Azure Resource Manager, one can add tags to Resource Groups (check out the new Azure Portal to explore Resource Groups). Last week I had to script a way to check that there is a tag on the resource group with a valid set of values. The Python Azure SDK doesn't yet support Azure Resource Manager operations, so I had to turn to the Ops side (the PowerShell way).

Don't worry if you have no idea what a tag is, the validation code is pretty neat.


For example - the Resource Group should have a tag named "Environment" on it with the valid values of "Dev", "QA" & "Prod".


There can be other tags on it, but we are looking only for the existence of this tag & its values.

Let's get started.
  1. Since the Azure Resource Manager cmdlets don't support certificate-based authentication, we have to use Azure AD here. The first step is to use the below cmdlet to add the account you use to log in to the Azure Portal:
    PS>Add-AzureAccount

  2. Once authenticated, switch to the Azure Resource Manager cmdlets using the below cmdlet, so that the AzureResourceManager module gets loaded:
    PS>Switch-AzureMode -Name AzureResourceManager

  3. Now you can see in the PowerShell host that the AzureResourceManager module is loaded:
    PS>Get-Module

    ModuleType Version Name ExportedCommands
    ---------- ------- ---- ----------------
    Manifest 0.9.1 AzureResourceManager {Add-AlertRule, Add-AutoscaleSetting, Add-AzureAccount, Ad...

  4. Use the cmdlet Get-AzureResourceGroup to get the Resource Groups and store them in a variable for later processing:
    PS>$ResourceGroups = Get-AzureResourceGroup

  5. Now we can filter the Resource Groups which have the tags property, like below:
    PS>$ResourceGroups | where tags


    ResourceGroupName : DexterPOSHCloudService
    Location : southeastasia
    ProvisioningState : Succeeded
    Tags :
    Name Value
    =========== =========
    Environment QA
    TestKey TestValue

    ResourceId : /subscriptions/4359ee69-61ce-430c-b885-4083b2656de7/resourceGroups/DexterPOSHCloudService

    ResourceGroupName : dexterseg
    Location : southeastasia
    ProvisioningState : Succeeded
    Tags :
    Name Value
    =========== =======
    Environment Testing

    ResourceId : /subscriptions/4359ee69-61ce-430c-b885-4083b2656de7/resourceGroups/dexterseg


    For those which don't have the Tags property we could throw a warning, but I leave that scripting logic for you to build upon.
  6. Now out of the 2 Resource Groups above, one has a valid Environment tag value of "QA" (in green) but the other one has an invalid tag value of "Testing" (in yellow).
    Before we start validating the values, we need to check whether the group has the desired Environment tag at all; for this we can use the contains() method.

    But if you look closely you will find something strange with the tags property :
    PS>$ResourceGroups[0].tags

    Name Value
    ---- -----
    Value LAB
    Name Environment
    Value TestValue
    Name TestKey

    The Tags property is an array of hashtables, but the keys are the literal strings "Name" & "Value". Not very intuitive, as I thought I would get Environment as one of the keys of the returned hashtable; I submitted an issue at the GitHub repo for this.
    Until it is fixed, we can check whether our Environment tag is present at all by using the ContainsValue method on the hashtable, like below:
    PS>$ResourceGroups[0].tags[0].ContainsValue('Environment')
    True

    Now if the tag has Environment as the value of the 'Name' key, then evidently the value of the 'Value' key is the one we are seeking.
    PS>$ResourceGroups[0].tags[0]['Name']
    Environment
    PS>$ResourceGroups[0].tags[0]['Value']
    QA

    Wow ! so much work am already dozing off :P
  7. Validating that the value is one of our permitted values ('Dev','QA','Prod') is relatively easy using the ValidateSet() parameter attribute, see below:
    PS>[Validateset('DEV','QA','PROD')]$testme = $ResourceGroups[0].tags[0]['Value']
    PS>[Validateset('DEV','QA','PROD')]$testme = $ResourceGroups[1].tags[0]['Value']
    The attribute cannot be added because variable testme with value LAB would no longer be valid.
    At line:1 char:1
    + [Validateset('DEV','QA','PROD')]$testme = $ResourceGroups[0].tags[0]['Value']
    + ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo : MetadataError: (:) [], ValidationMetadataException
    + FullyQualifiedErrorId : ValidateSetFailure


    When we decorate the $testme variable with the ValidateSet() attribute and perform the assignment, it throws an exception if the value is not in the set (note that the second resource group in step 5 doesn't have a valid Environment tag value), which we can catch and then display a message saying the Environment tag doesn't have a valid value.
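Until the GitHub issue is fixed, here is a small workaround sketch (assuming each entry in .Tags is a hashtable with the literal 'Name' & 'Value' keys, as shown above) that reshapes the tags into a friendlier hashtable keyed by tag name:

$friendlyTags = @{}
foreach ($Tag in $ResourceGroups[0].Tags) {
    # Key the new hashtable on the tag name, e.g. Environment -> QA
    $friendlyTags[$Tag['Name']] = $Tag['Value']
}
$friendlyTags['Environment']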

Below is the sample code for one to build upon :
Add-AzureAccount
Switch-AzureMode -Name AzureResourceManager

$ResourceGroups = Get-AzureResourceGroup | where Tags

foreach ($ResourceGroup in $ResourceGroups) {
    if ($ResourceGroup.Tags.Values.Contains('Environment')) {
        Write-Verbose -Message "$($ResourceGroup.ResourceGroupName) Environment tag found, validating the value" -Verbose
        foreach ($Tag in $ResourceGroup.Tags) {
            if ($Tag.ContainsValue('Environment')) {
                TRY {
                    [ValidateSet('DEV','QA','PROD')]$testme = $Tag['Value']
                }
                CATCH [System.Management.Automation.ValidationMetadataException] {
                    Write-Error -Message "Environment tag doesn't contain a valid value -->('DEV','QA','PROD')"
                }
            }
        }
    }
    else {
        Write-Warning -Message "$($ResourceGroup.ResourceGroupName) Environment tag not found"
    }
}

Thanks for reading and that is it for today's post.
~Dex~



PowerShell + Pester + Jenkins : Journey to Continuous Integration


Continuous Integration, huh ?

Simply put, CI is running all the tests (against your code, system etc.) frequently in order to validate the code and see that everything integrates well. For example - if I check in code, then CI runs all the tests to see if the commit broke anything.

Why are we doing this CI stuff anyway ?

To catch failures on a regular basis, so that they are easy to fix at an early stage.


Note
- I am a mere mortal and follower of DevOps (much broader term) but have started to appreciate the simplicity all these concepts bring in. Don't mistake me for an expert here ;)

A little background on why I explored using Jenkins as the CI solution: the project I recently started working on requires me to code in Python/PowerShell, and the team already uses Jenkins for other projects in Python, Java, Ruby etc., so we needed to integrate running Pester tests from Jenkins for our PowerShell codebase.


With all the CI stuff cleared out, time to move on towards the task at hand for this post.
In this post, I have a Jenkins server installed on an Azure VM. The installation is pretty straightforward and I was drafting a post on it from scratch, but then stumbled across a tweet by Matthew Hodgkins, and his posts do a superb job. Check out the Resources section at the bottom for links to his posts.

Below is the tweet :





So moving on, this post will only revolve around integrating Pester with Jenkins.

We need to perform a few housekeeping steps to make the Pester integration easier for us.

  1. Install the PowerShell Plugin & NUnit Plugin. Click on Manage Jenkins > Manage Plugins > under the Available tab, search for 'PowerShell' and 'NUnit' respectively and install them:

  2. Once done come back to the Home page and click 'New Item' and create a free style project.

  3. Your new project should appear in the dashboard now; hover over it and click on 'Configure'.

  4. For this post I am gonna dump a PS1 file and associated Pester tests in a directory and add a build step which runs the Pester tests. One can also integrate version control tools like Git, Subversion etc. with Jenkins. So let's configure our new project to use a folder, say E:\PowerShell_Project. Below is a gif showing that:

  5. Now in the same page, scroll down to Build steps and add a simple build action; I will use it to show you a possible gotcha. Note - We added the PowerShell plugin to Jenkins to get the option to add a build step using PowerShell natively.
    Let's add a few test PS statements to it, like:
    $env:UserName
    Get-Module Pester
    Get-Location


    Note - You can use $env:PSModulePath in the above snippet (or a normal PS console) to see which folders PowerShell searches during module discovery.


  6.  Click on "Build Now"forthe project to see a possible pitfall.

  7.  Below is the console output of the above build run :
    Started by user anonymous
    Building in workspace E:\PowerShell_Project
    [PowerShell_Project] $ powershell.exe "& 'C:\Windows\TEMP\hudson3182214357221040941.ps1'"
    DEXCLIENT$

    Path
    ----
    E:\PowerShell_Project


    Finished: SUCCESS
    A few important things to note here:
    • When running PowerShell code as part of a build step, be informed of which user account is being used. In my case it is the System account (my machine name is DexClient).
    • Based on the above, check whether the module is discoverable to PowerShell; notice that Get-Module Pester in the build step returned nothing (Pester was placed in my user's Modules folder).
    • If you are using a custom workspace (step 4), the default location for the PowerShell host that runs our code (added in the build step) is set to that folder.
    • Check out how Jenkins runs the PowerShell code specified in the build step:
      powershell.exe
       
  8. Now one can definitely configure Jenkins to handle this in a better way, but that would make my post lengthy. The quick fix here is to load the Pester module explicitly with the full path. For example: Import-Module 'C:\Users\dexterposh\WindowsPowerShell\Modules\Pester\pester.psd1'

  9. Once you have taken care of how to load the module, you can add another build step or modify the existing one to run the Pester tests (a minimal example test file is sketched after this list). I modified the existing build step to look like below:
    Import-Module 'C:\Users\dexterposh\WindowsPowerShell\Modules\Pester\pester.psd1'
    Invoke-Pester -EnableExit -OutputFile PSTestresults.xml -OutputFormat NUnitXml


    Take note of the parameters used here: -OutputFile, -OutputFormat and the -EnableExit switch.
    Pester is really awesome as it supports integrating with almost all CI solutions out there.
    Read more here.
  10. As a last step, we will add a post-build step to consume our PSTestresults.xml with the NUnit plugin. Below is the last gist showing the test run:


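For reference, below is a minimal, hypothetical test file (Get-Sum.Tests.ps1, exercising an assumed Get-Sum function in Get-Sum.ps1) of the kind a build step like the above would pick up from the workspace folder:

# Get-Sum.Tests.ps1 - dot-source the script under test, then describe its behaviour
. "$PSScriptRoot\Get-Sum.ps1"

Describe 'Get-Sum' {
    It 'adds two numbers' {
        Get-Sum -First 2 -Second 3 | Should Be 5
    }
}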

Resources :

Matthew Hodgkins - Post on installing Jenkins and Automation using PowerShell
https://www.hodgkins.net.au/powershell/automating-with-jenkins-and-powershell-on-windows-part-1/


https://github.com/pester/Pester#continuous-integration-with-pester
https://wiki.jenkins-ci.org/display/JENKINS/Installing+Jenkins

PowerShell MVP 2015

I received the official notification yesterday that my PowerShell MVP award has been renewed !!





In this post, I try to look back at my journey as a PowerShell MVP :)

This award is dedicated to PSBUG, which feels like family to me now.

Initially Overwhelmed

At first when I got the award I was overwhelmed; to be in such an elite group is something. I felt tremendous pressure, as I had been entrusted with a huge responsibility. For the initial few months I was under the MVP vibe; people recognized me wherever I went, and I got a chance to introduce myself as an MVP.
Below is one of the pics from a User Group meet:

Troubles

But soon the dust settled and I realized that at some point, being an MVP, I had stopped enjoying my work with PowerShell; it was more of a responsibility now.
After a few weeks of pondering & meditating (I do, no kidding), I realized that the award is a recognition of last year's contributions, and the very reason for it is that I enjoy learning and sharing with the community.


MVP Open Day - Eye Opener

I went to the MVP Open Day at Bangalore and had time to hang out with MVPs from all over India. Seeing really passionate people talking tech all the time was an amazing experience, and I understood that the secret to it all is to "Enjoy" and not be so hard on myself. Keep doing what I enjoy :)

The best part of the Open Day was talking at length with Ravi Sir & Aman Sir.

From left to right: Ravikanth Sir, Aman Sir & me (3 PowerShell MVPs from India).

Inspiration Source - never runs dry

PSBUG community has been a great source of motivation and inspiration all along. Some of the amazing people in the Industry come together and talk Technology on a monthly basis, keeps the fire going.

Many people don't understand why to go and meet in person when you can watch tutorials online. Apart from the vast amount of knowledge you carry home, below are a few benefits I can think of now:

  •  First you network with people who tackle real world problems and these interactions come in handy when needed.
  • Second is you can get ideas/ opinion on any Script/ Project you are working on from the Community. (Most of my last year posts came out of some cool ideas from the community)
  • Third, we don't do serious boring stuff at these meets. We crack jokes and share our IT stories often.

In my opinion, we all do the normal day-to-day work and get paid at month's end for it. Where is the fun in that? Once a month one can take some time off and recharge the batteries.

PowerShell + Azure + Python : Use Project Custom Settings


Background

First, to set up the background for the post [reaching a bit into the Dev side], a quick introduction to what an Azure Cloud Service is, along with some terms Devs like to throw around:

Cloud Service :
A PaaS offering, running VMs on Microsoft Azure. You have control over these VMs, as you can remote into them and customize them to run your apps. A typical cloud service contains:

  • Web Role - Windows Server running IIS on top of it.
  • Worker Role - Windows Server.
Now using Visual Studio one can configure the Cloud Service as per one's needs (check Resources at the bottom). There are typically 2 files in your project definition which need tweaking (source: MSDN - link in the Resources section):


  • ServiceDefinition.csdef  : The service definition file defines the runtime settings for your cloud service including what roles are required, endpoints, and virtual machine size. None of the data stored in this file can be changed when your role is running.
  • ServiceConfiguration.cscfg : The service configuration file configures how many instances of a role are run and the values of the settings defined for a role. The data stored in this file can be changed while your role is running.







A full post on how to use Visual Studio to deploy a cloud service is out of scope for this post and me ;)

Task at hand

We were working on a Python project which will run on top of Azure utilizing cloud services (web and worker role). We had to customize our worker role a bit using the custom settings that can be defined for a Cloud project in Visual Studio.

The customization needed us to read the custom settings defined in the Service Configuration file for the Azure Worker role and then consume it as per some logic.

The link at MSDN shows how to do it in C#, so I tried to port it to PowerShell.
It is relatively easy if you have been working with PS for a while.


Steps:


  1. Create a new Project in Visual Studio.

  2. Select Python Azure Cloud Service template to start with (need Python Azure SDK installed).

  3. After you create the project from the template, it will ask you to select roles for your Cloud Service. I added a Web & Worker role; this depends on your project. After that it asks you to select a Python environment; I chose a virtual environment for my Python app, which again depends on your project.


  4. Now let's add a custom setting to our Worker role. Right click the Worker role > Properties. It will open up a configuration page like below:

  5. Now go to 'Settings' and click on 'Add Settings' button; go ahead and add the custom setting.



    Note - Adding a custom setting above will make an entry in the ServiceConfiguration.*.cscfg files, see below:

  6. Before moving further and showing you how to access the custom setting in your code, it is important to understand the role PowerShell plays in configuring a Role.

    If you notice, there is a bin directory under each of your Roles which contains PS scripts that configure your role (e.g. installing WebPI, Windows features etc.). Also take notice of a ps.cmd file which invokes these PowerShell scripts as a startup task.



    Take a look at the ServiceDefinition.csdef (which contains the runtime settings for my cloud service) and notice that it creates a startup task for the role.



    Below is the gist showing the ps.cmd batch file which calls our PowerShell script; you can always modify it to fit your custom requirements, leaving it up to you to direct the Verbose stream to a log file (the Verbose stream is used later):
  7. Now to the final piece in the puzzle: how to access the custom setting value and use it while configuring the Worker Role?
    Well, an example is already provided at MSDN, click here.

    PowerShell to the rescue. Since a PowerShell script is already being used to configure the cloud service, one can put the extra few lines of code into the script named ConfigureCloudService.ps1 to access the custom setting and make decisions or perform any action based on the value. You could also add another script and get it called from ps.cmd or ConfigureCloudService.ps1 (you know how it works already).

    The easiest way is to load the DLL and then simply call the static method named GetConfigurationSettingValue on the RoleEnvironment class.
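Below is a rough sketch of those few lines; note that the DLL path and the setting name ('MyCustomSetting') are placeholders/assumptions - adjust them to wherever the Azure SDK's ServiceRuntime assembly lives on the role VM and to the setting you actually defined:

# Placeholder path - point it at Microsoft.WindowsAzure.ServiceRuntime.dll on the role VM
Add-Type -Path 'E:\plugins\Microsoft.WindowsAzure.ServiceRuntime.dll'

# Read the custom setting defined in ServiceConfiguration.cscfg
$value = [Microsoft.WindowsAzure.ServiceRuntime.RoleEnvironment]::GetConfigurationSettingValue('MyCustomSetting')
Write-Verbose -Message "MyCustomSetting = $value" -Verbose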

I think it doesn't get easier than this. PowerShell gives us the capability to tap into the .NET Framework; as a System Admin working in the Microsoft realm, it makes me more productive. Have you boarded the PS bandwagon yet?

Resources:

Configuring an Azure Project
https://msdn.microsoft.com/en-us/library/azure/ee405486.aspx

How to: Configure the Roles for an Azure Cloud Service with Visual Studio
https://msdn.microsoft.com/en-us/library/azure/hh369931.aspx

PowerShell + SCCM 2012 : Create Packages & Programs


It has been a while since I chartered the waters of WMI and Configuration Manager, so pardon any silly mistakes. One of my friends from PSBUG asked me a few questions revolving around creating packages & programs in ConfigMgr using PowerShell.

Every ConfigMgr admin knows that the new Application model has been introduced in ConfigMgr 12, but Packages are here to stay for a while. Packages and Programs are ideal for deploying scripts (one-time or recurring ones) and are better suited for deploying apps during OSD (so I have heard).

There are essentially 3 ways of working with ConfigMgr, and below is the pic which says it all:




The post is broken up in 3 parts (based on how you use ConfigMgr):

  1. GUI Way - Doing this to show background on how we do it manually.
  2. Cmdlet Way - using the CM cmdlets to create the package and program
  3. WMI Way - exploring WMI to do the same.



GUI Way :

I believe that doing things the GUI way the first few times helps us understand and grasp the process, but moving on we should try to automate repetitive tasks. Below is an animated gif showing how to create a minimalist Package & Program (for 7-Zip) in the ConfigMgr console:



Cmdlet Way :

Using the cmdlets is straightforward, but for the sake of people who are new to the PowerShell way of managing ConfigMgr, below are the detailed steps.

Import the ConfigMgr module. You should have the ConfigMgr cmdlet library installed on your box:
PS>Import-Module -Name ConfigurationManager

Once done, the next step is to discover the cmdlets. How, you ask?
PS>Get-Command -Noun CMPackage -Module ConfigurationManager

CommandType Name ModuleName
----------- ---- ----------
Cmdlet Export-CMPackage ConfigurationManager
Cmdlet Get-CMPackage ConfigurationManager
Cmdlet Import-CMPackage ConfigurationManager
Cmdlet New-CMPackage ConfigurationManager
Cmdlet Remove-CMPackage ConfigurationManager
Cmdlet Set-CMPackage ConfigurationManager

Now go ahead and read the help for the cmdlet New-CMPackage to understand what I will be doing next. Create a new Package:
PS>New-CMPackage -Name "7Zip – PS Way" -Path "\\dexsccm\Packages\7-zip\v9.20"

If one looks closely at the syntax of New-CMPackage, they will immediately notice that the cmdlet doesn't let you set a whole lot of options on the package you just created. See below the different parameter sets for the cmdlet:
PS>Get-Command New-CMPackage -Syntax

New-CMPackage -Name &lt;String&gt; [-Description &lt;String&gt;] [-Manufacturer &lt;String&gt;] [-Language &lt;String&gt;] [-Version &lt;String&gt;] [-Path &lt;String&gt;] [-DisableWildcardHandling] [-ForceWildcardHandling] [-WhatIf] [-Confirm] [&lt;CommonParameters&gt;]

New-CMPackage -FromDefinition -PackageDefinitionName &lt;String&gt; -SourceFileType &lt;SourceFileType&gt; -SourceFolderPathType &lt;SourceFolderPathType&gt; -SourceFolderPath &lt;String&gt; [-DisableWildcardHandling] [-ForceWildcardHandling] [-WhatIf] [-Confirm] [&lt;CommonParameters&gt;]

New-CMPackage -FromDefinition -PackagePath &lt;String&gt; -PackageNoSourceFile [-DisableWildcardHandling] [-ForceWildcardHandling] [-WhatIf] [-Confirm] [&lt;CommonParameters&gt;]

New-CMPackage -FromDefinition -PackagePath &lt;String&gt; -SourceFileType &lt;SourceFileType&gt; -SourceFolderPathType &lt;SourceFolderPathType&gt; -SourceFolderPath &lt;String&gt; [-DisableWildcardHandling] [-ForceWildcardHandling] [-WhatIf] [-Confirm] [&lt;CommonParameters&gt;]

New-CMPackage -FromDefinition -PackageDefinitionName &lt;String&gt; -PackageNoSourceFile [-DisableWildcardHandling] [-ForceWildcardHandling] [-WhatIf] [-Confirm] [&lt;CommonParameters&gt;]

So how does one set all those properties for a Package via PowerShell?
Go ahead and read the help for the Set-CMPackage cmdlet and you will see that this is the cmdlet which does the rest of the customization needed for the Package we created. Suppose I want to enable binary differential replication for this package along with setting the distribution priority to high; use the below:
PS>Set-CMPackage -Name "7Zip – PS Way" -EnableBinaryDeltaReplication
PS>Set-CMPackage -Name "7Zip – PS Way" -DistributionPriority High

Did you notice above that I had to use the Set-CMPackage cmdlet twice? Why?
Hint - Check what parameter sets are for a cmdlet in PowerShell.
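To explore that hint, a quick sketch that lists the parameter sets Set-CMPackage exposes (parameters living in different sets cannot be combined in a single call):

# Each parameter belongs to one or more parameter sets
(Get-Command -Name Set-CMPackage).ParameterSets | Select-Object -Property Name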

Moving on, now it is time to create the (standard) Program for the package, which will install the 7-Zip package for us. The cmdlet is New-CMProgram; if you still don't know how to figure that out, read the help for Get-Command ;)

Let's create the Program:

PS>New-CMProgram -PackageName "7Zip – PS Way" -StandardProgramName "7zip PS Install - Program" -CommandLine "msiexec.exe /I 7z920-x64.msi /quiet /norestart"

Now you can configure a lot of options for the program while creating it, or you can use Set-CMProgram to configure them later. For example, I am setting the run type of the standard program created above to hidden:

Set-CMProgram -PackageName "7Zip – PS Way" -StandardProgramName "7zip PS Install - Program" -StandardProgram -RunType Hidden

One can play with Set-CMProgram to tweak the program settings as needed; there are a whole lot of parameters and switches to explore in this cmdlet.

Once the Package and Program have been created, it is time to distribute them to the DP Groups or DPs. The cmdlet is Start-CMContentDistribution.
Start-CMContentDistribution -PackageName "7Zip – PS Way" -DistributionPointGroupName DexLabDPGroup


WMI Way :

Let's get to the more adventurous way of creating the Packages & Programs using WMI.
Fair warning: this is a more complex way, and if you don't understand how WMI works, then my advice would be to stick to the cmdlet way.

Start with creating a WMI Instance of SMS_Package class, supply the Package name and the PkgSourcePath while creating the instance.
New-CimInstance -ClassName SMS_Package -Property @{'Name'='7zip - WMI Way';'PkgSourcePath'='\\dexsccm\Packages\7-zip\v9.20'} -Namespace Root/SMS/site_DEX


ActionInProgress : 1
AlternateContentProviders :
Description :
ExtendedData :
ExtendedDataSize : 0
ForcedDisconnectDelay : 5
ForcedDisconnectEnabled : False
ForcedDisconnectNumRetries : 2
Icon :
IconSize : 0
IgnoreAddressSchedule : False
ISVData :
ISVDataSize : 0
IsVersionCompatible :
Language :
LastRefreshTime : 4/10/1970 6:35:00 AM
LocalizedCategoryInstanceNames : {}
Manufacturer :
MIFFilename :
MIFName :
MIFPublisher :
MIFVersion :
Name : 7zip - WMI Way
NumOfPrograms : 0
PackageID : DEX00017
PackageSize : 0
PackageType : 0
PkgFlags : 0
PkgSourceFlag : 1
PkgSourcePath : \\dexsccm\Packages\7-zip\v9.20
PreferredAddressType :
------ Snipped -------

There are attributes or properties which you can set on a WMI Instance later after creation, but you need to read the Class documentation for properties with Read/Write access type.

Now, if we look at the Package created, we will soon notice that the PkgSourceFlag is set to 1 (the default value - STORAGE_NO_SOURCE: the program does not use source files). Check the documentation and you will realize you need to set it to 2 (STORAGE_DIRECT). With the value of 1 set for PkgSourceFlag, you will see the below in the properties for the Package.


So let's get to it now. First get the CIM instance stored in a variable, then use Set-CimInstance to set the PkgSourceFlag property on it and verify the changes. Below is the code and a gif of it in action (it shows a green screen for the code executed):
# get the CIM Instance stored in a variable
$package = Get-CimInstance -ClassName SMS_Package -Filter "Name='7zip - WMI Way'" -Namespace root/SMS/site_DEX

# set the PkgSourceFlag on the CIM Instance
Set-CimInstance -InputObject $Package -Property @{'PkgSourceFlag'=[uint32]2}


Let's move ahead and create a new Program to install the Package using WMI. The WMI class to focus on is SMS_Program. Before creating the WMI/CIM instance, reading the documentation of the classes is a must to avoid any surprises.

In its Remarks, the doc for SMS_Program lists the below:

A program is always associated with a parent package and typically represents the installation program for the package. Note that more than one program can be associated with the same package. The application uses the PackageID property to make this association. Your application cannot change this property after the SMS_Program object is created. To associate the program with a different package, the application must delete the object and create a new object with a new PackageID value.

As mentioned above, we need the PackageID of our Package in order to associate a Program to it. If you are following this post, then the variable $Package already has the PackageID property, which we can dot-reference and use.

Below is the code snippet which I used to create a new Program for the Package :

$ProgramHash3 = @{
    PackageID   = $package.PackageID
    ProgramName = '7zip WMI Install - Program'
    CommandLine = 'msiexec.exe /I 7z920-x64.msi /quiet /norestart'
}
New-CimInstance -ClassName SMS_Program -Namespace Root/SMS/site_DEX -Property $ProgramHash3


Note - I found a bug in the MSDN SMS_Program documentation, which does not mark the CommandLine property as a Key Qualifier. Below is what the doc says:

CommandLine

Data type: String
Access type: Read/Write
Qualifiers: [ResID(904), ResDLL("SMS_RSTT.dll")]
The command line that runs when the program is started. The default value is "".

But while trying to create a new SMS_Program instance, I realized that one has to explicitly pass this property while creating the object, and moreover it can't be an empty string either. See the below GIF:

Voila! Now you can read the documentation of the SMS_Program class and try using Set-CimInstance to set some of the writable attributes on the object. I am leaving content distribution using WMI as an exercise; if I remember correctly, I covered it in one of my earlier posts (not sure which one though).

Resources:

My PowerShell + ConfigMgr Posts collection (quite a few):
http://www.dexterposh.com/p/collection-of-all-my-configmgr.html

Configuration Manager PowerShell Tuesdays: Creating and Distributing a Package / Program
http://blogs.technet.com/b/neilp/archive/2013/01/15/configuration-manager-sp1-powershell-tuesday-creating-and-distribution-a-package-program.aspx


SMS_Package Class:
https://msdn.microsoft.com/en-us/library/cc144361.aspx

PowerShell (Bangalore | Hyderabad ) UG - First Hangout

Recently we started a series of hangouts covering the basics of PowerShell.

The first hangout aired successfully on 25th August 2014 and is now available on the "PowerShell Bangalore User Group" YouTube channel.

We talked a little bit about the background of Windows PowerShell, how it makes an ITPro's life awesome and why it is very important to get started with PowerShell.


The video can be found below:




We learned a few lessons from the community's feedback on how to improve these sessions.

So hopefully we will improve in future :)
 

Cheers !

Upcoming speaking engagements

Recently I started working at AirWatch by VMware and have been busy learning the new mobile technologies.

I have not been able to do much on the PowerShell and ConfigMgr side, but I did a few things in Azure with PowerShell (built my LAB up there).

So I will be speaking on this very topic at the Microsoft Community Day event on 23rd August. Below is the Eventbrite link to register.



Later in September, I will be speaking at the Microsoft event on "Transforming the Datacenter" at Bangalore, India. Below is the link for that (limited seats):

https://msevents.microsoft.com/CUI/EventDetail.aspx?EventID=1032592541&culture=en-IN


Also planning a few hangouts for the PowerShell Bangalore User Group to help new people embrace the Shell. :)



PSBUG : Let's Automate together

I made this short video clip (powered by PowToon) to show the essence of our PowerShell Bangalore User Group.




I got the inspiration for doing these short video clips after seeing the session at the August SQLBangalore UG meet by Amit Banerjee (MSFT PFE).



Little about PSBUG

We are a bunch of cool PowerShell folks who help fellow ITPros onboard the PowerShell awesomeness club. If you are just starting out with PowerShell, don't worry, we will help you out as a community :)

If you are a PowerShell expert then come speak for us and spread the word.



The first name which comes to mind when speaking of PSBUG is that of Ravikanth Sir (PowerShell MVP), who has been a constant inspiration to us folks here. Ravi Sir has already written an awesome post about our story at PowerShell Magazine -> here

Big shout out to all the cool PS-Bugs out there --> Manoj (PowerShell MVP), Harshul, Pradeep, Manas, Vinith, Hemanth, Karthikeyan, Anirban and many more who show up on lazy weekends ;)


Hope we reach out to more guys out there and help them embrace the shell :)

P.S. - We have "Community day" coming up on 23rd August 2014, if you are around then do drop by.

PowerShell + Azure : Deploy a DC

Recently my laptop got stolen, and that gave me a push to build my lab on Azure. I tweeted this and got an awesome reply from Jim Christopher [PowerShell MVP]:



Thanks to my friend Fenil Shah who lent me his laptop to try out Azure.
Cheers to having awesome friends :)


I thought it would be better if I put my notes as a post. These are entirely for my reference ;) 

The best posts around Azure + PowerShell are by Michael Washam and can be found on his blog here.

My action plan for this post is to configure, from scratch, a Server Core 2012 R2 machine running Active Directory; I don't have anything on my Azure account right now.

Below are the steps:


  1. Sign Up for Azure (Free Trial is available)
  2. Install Azure PowerShell Module & Configure your subscription
  3. Create a Virtual Net for your LAB
  4. Deploy the VM
  5. Connecting to VM using PSRemoting
  6. Add a new Data Disk to VM
  7. Install ADDS and a new domain.
Steps 1-3 are a one-time activity; the next time you want to spin up a VM, there is no need to repeat them.

Sign Up for Azure

Go to https://azure.microsoft.com/en-us/ to sign up for a free trial of Azure.
One has to supply Credit Card / Debit Card information for verification, which will deduct $1 (this is refunded... don't worry, you misers :D ).


Note - There is a credit limit of $200 in the free trial and by default your subscription won't go above this limit, so be assured.


Install Azure PowerShell Module & Configure your Subscription

There are very good articles below which describe this step:



Following the above two articles below is what I did:

# Get the Settings file
Get-AzurePublishSettingsFile

#Import the file
Import-AzurePublishSettingsFile -PublishSettingsFile "C:\Temp\Visual Studio Ultimate with MSDN-7-19-2014-credentials.publishsettings"

#Remove the Settings once imported
Remove-item "C:\Temp\Visual Studio Ultimate with MSDN-7-19-2014-credentials.publishsettings"



Once you have the settings file imported, you can remove it and then you can see that the Subscription information has been imported successfully using



#get the Subscription details
Get-AzureSubscription

After one has imported the Subscription information, one has to select the Subscription in order to use it. Below is what I did [Note - I have only one subscription, so I used (Get-AzureSubscription).SubscriptionName below].


Select-AzureSubscription -SubscriptionName (Get-AzureSubscription).SubscriptionName

Now to verify that this is the Subscription my Azure cmdlets will run against, run the below; it should show your default Subscription details:


Get-AzureSubscription -Default

At this point we need a storage account before proceeding further, as this is where your data (VM VHDs etc.) will be stored. I am going to create a storage account with the name "dexterposhstorage" (note: only lowercase letters and numbers are allowed).


#create the Storage Account
New-AzureStorageAccount -Location "Southeast Asia" -StorageAccountName "dexterposhstorage" -Label "DexLAB" -Description "Storage Account for my LABs" -Verbose

#Turn off the Geo Replication...am just using it for my lab
Set-AzureStorageAccount -StorageAccountName dexterposhstorage -GeoReplicationEnabled $false -Verbose

While doing this if you get an error like below:

New-AzureStorageAccount : Specified argument was out of the range of valid values.

Then probably the name you chose for the storage account is already taken, or it doesn't adhere to the naming standards (only lowercase letters and numbers).

Once you have the storage account created, set it as the current storage account for your default subscription (you can create many storage accounts but only use one at a time):


#set your storage account
Set-AzureSubscription -SubscriptionName (Get-AzureSubscription -Default).SubscriptionName -CurrentStorageAccountName "dexterposhstorage"

Note - The above steps are a one-time activity. Once you have followed them, the next time you just have to load the Azure PS module and start automating.


3. Create a Virtual Net for your LAB


In order to run a full-blown LAB in Azure with my own DNS, AD etc., I have to use Virtual Networks. Right now the easiest way to do this is using the portal, as there is no cmdlet to create a new VNET; there is a Set-AzureVNetConfig which requires us to create and manipulate an XML file to create VNETs, but I was looking to do this ASAP (there are links in the Resources section if you want to automate this part too).

Below is the XML which I got after adding the VNET from the portal




Below is how the VNet looks like in the Azure Management Portal:




Note that in the subnet "AD" the first usable IP address is 192.168.0.4
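As a quick sanity check (a sketch using the service management cmdlets), you can test whether that address is still available in the VNet before binding it later:

# Check whether 192.168.0.4 is free in the DexVNET virtual network
Test-AzureStaticVNetIP -VNetName 'DexVNET' -IPAddress '192.168.0.4'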

If you want to do this using PowerShell too (which I will do eventually), refer to the resources at the end.


4. Deploy VM

If you are deploying a VM for the first time, then you have to create an affinity group (optional), cloud service & storage account (mandatory).

Now let's define a few PowerShell variables for the Affinity Group, Cloud Service, Storage Account, DNS Server IP address and the name of our Domain Controller.


$AffinityGroup = "DexAffinityGroup"
$cloudService = "DexCloudService"
$StorageAccount = "dexterposhstorage"
$DNSIP = '192.168.0.4' #the first usable IP address in our Subnet "AD"
$VMName = 'DexDC' #Name of the VM running our Domain Controller

Now it is time to create a new Affinity Group.
Also I have turned off Geo-replication as this is my test LAB (my preference).

#create a new Affinity Group for my Lab resources
New-AzureAffinityGroup -Name $AffinityGroup -Location "Southeast Asia" -Label DexLAB -Description "Affinity Group for my LAB" -Verbose


In Azure when you deploy a VM it is associated with a cloud service (which is a logical container for Azure resources). So let's create a new one 


#Now create a new Cloud Service
New-AzureService -ServiceName $cloudService -AffinityGroup $AffinityGroup -Label DexLAB -Description "Cloud Service for my LAB" -Verbose

The housekeeping activities needed to deploy VMs are done for my Azure subscription. Now I need to select an image from the gallery and use it to deploy my VMs. The cmdlet to get the images is Get-AzureVMImage, but out of all the images I am looking only for the latest Server 2012 R2 image.

I use the below to get the image stored in the variable $image (see the use of -OutVariable):



[ADMIN] PS C:\> Get-AzureVMImage | where { $_.ImageFamily -eq "Windows Server 2012 R2 Datacenter" } | Sort-Object -Descending -Property PublishedDate | Select-Object -First 1 -OutVariable image


ImageName : a699494373c04fc0bc8f2bb1389d6106__Windows-Server-2012-R2-201407.01-en.us-127GB.vhd
OS : Windows
MediaLink :
LogicalSizeInGB : 128
AffinityGroup :
Category : Public
Location : East Asia;Southeast Asia;North Europe;West Europe;Japan West;Central US;East US;East US 2;South
Central US;West US
Label : Windows Server 2012 R2 Datacenter, July 2014
Description : At the heart of the Microsoft Cloud OS vision, Windows Server 2012 R2 brings Microsoft's experience
delivering global-scale cloud services into your infrastructure. It offers enterprise-class
performance, flexibility for your applications and excellent economics for your datacenter and hybrid
cloud environment. This image includes Windows Server 2012 R2 Update.
Eula :
ImageFamily : Windows Server 2012 R2 Datacenter
PublishedDate : 7/21/2014 12:30:00 PM
IsPremium : False
IconUri : WindowsServer2012R2_45.png
SmallIconUri : WindowsServer2012R2_45.png
PrivacyUri :
RecommendedVMSize :
PublisherName : Microsoft Windows Server Group
OperationDescription : Get-AzureVMImage
OperationId : b556cf7a-a4e8-c744-8471-f0ea0e3473ca
OperationStatus : Succeeded



[ADMIN] PS C:\> $image.imagename
a699494373c04fc0bc8f2bb1389d6106__Windows-Server-2012-R2-201407.01-en.us-127GB.vhd


While deploying VMs in Azure one has to build configuration objects before finally creating the VM, so let's build the first one to specify the VM instance size, image name (from above) etc., and store the config in a variable named $NewVM.

Note the use of Tee-Object to store the object in a variable. People might wonder why not use -OutVariable as above; a small hint: go ahead and use it, then check the type of the object being returned ;)
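A tiny sketch of that difference, using Get-Date as a stand-in:

Get-Date -OutVariable ov | Out-Null
$ov.GetType().FullName    # System.Collections.ArrayList - the result gets wrapped
Get-Date | Tee-Object -Variable tv | Out-Null
$tv.GetType().FullName    # System.DateTime - the original object is preserved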



[ADMIN] PS C:\> New-AzureVMConfig -Name $VMName -InstanceSize Small -ImageName $image.ImageName -DiskLabel "OS" -HostCaching ReadOnly | Tee-Object -Variable NewVM


AvailabilitySetName :
ConfigurationSets : {}
DataVirtualHardDisks : {}
Label : DexDC
OSVirtualHardDisk : Microsoft.WindowsAzure.Commands.ServiceManagement.Model.PersistentVMModel.OSVirtualHardDis
k
RoleName : DexDC
RoleSize : Small
RoleType : PersistentVMRole
WinRMCertificate :
X509Certificates :
NoExportPrivateKey : False
NoRDPEndpoint : False
NoSSHEndpoint : False
DefaultWinRmCertificateThumbprint :
ProvisionGuestAgent : True
ResourceExtensionReferences :
DataVirtualHardDisksToBeDeleted :

Time to add another config to our VM which will specify the Admin User Name and Password for the VM:
$password = "P@ssw0rd321"
$username = "DexterPOSH"


[ADMIN] PS C:\> Add-AzureProvisioningConfig -Windows -Password $password -AdminUsername $username -DisableAutomaticUpdates -VM $newVM


AvailabilitySetName :
ConfigurationSets : {DexDC, Microsoft.WindowsAzure.Commands.ServiceManagement.Model.PersistentVMModel.NetworkC
onfigurationSet}
DataVirtualHardDisks : {}
Label : DexDC
OSVirtualHardDisk : Microsoft.WindowsAzure.Commands.ServiceManagement.Model.PersistentVMModel.OSVirtualHardDis
k
RoleName : DexDC
RoleSize : Small
RoleType : PersistentVMRole
WinRMCertificate :
X509Certificates : {}
NoExportPrivateKey : False
NoRDPEndpoint : False
NoSSHEndpoint : False
DefaultWinRmCertificateThumbprint :
ProvisionGuestAgent : True
ResourceExtensionReferences : {BGInfo}
DataVirtualHardDisksToBeDeleted :

The first VM deployed in our LAB will be a Domain Controller and we need to make sure that it always gets the same local IP address; that's why we created a subnet named "AD" in our Virtual Network, and we will place our VM there (the only machine in that subnet, ensuring that it gets the first usable IP address).
In addition, as an extra precaution, we can use the cmdlet Set-AzureStaticVNetIP to bind the IP address to our VM.


# set the AD Subnet for this machine
 Set-AzureSubnet -SubnetNames AD -VM $newVM

 #set the Static VNET IPAddress of 192.168.0.4 for our VM
 Set-AzureStaticVNetIP -IPAddress $DNSIP -VM $newVM


With all the configuration objects created, we finally create the new VM:


New-AzureVM -ServiceName $cloudService -VMs $newVM -VNetName "DexVNET"  -AffinityGroup DexAffinityGroup

As an alternative, one can use New-AzureQuickVM (use this if you are using the Azure Automation feature); there are a few cases where New-AzureVM fails miserably.

Note - In addition, one can specify the -WaitForBoot switch (New-AzureVM) to pause script execution until the VM is up and ready.



Connecting to Azure VM using PSRemoting


Once the VM is up and running, it is time to add a new disk to it for storing the SysVol folder for AD Domain Services. I wanted to do this using PowerShell too, as Server 2012 supports disk management tasks through cmdlets. But for this I need to configure my laptop to be able to talk to the WinRM endpoint sitting behind the cloud service (by default, RDP and WinRM endpoints are opened for each VM).

Again this has already been explained at the below link:

http://michaelwasham.com/windows-azure-powershell-reference-guide/introduction-remote-powershell-with-windows-azure/


Following the above link, the below code does the work for me:
$WinRMCert = (Get-AzureVM -ServiceName $CloudService -Name $VMName | select -ExpandProperty vm).DefaultWinRMCertificateThumbprint
$AzureX509cert = Get-AzureCertificate -ServiceName $CloudService -Thumbprint $WinRMCert -ThumbprintAlgorithm sha1

$certTempFile = [IO.Path]::GetTempFileName()
$AzureX509cert.Data | Out-File $certTempFile

# Target The Cert That Needs To Be Imported
$CertToImport = New-Object System.Security.Cryptography.X509Certificates.X509Certificate2 $certTempFile

$store = New-Object System.Security.Cryptography.X509Certificates.X509Store "Root", "LocalMachine"
$store.Open([System.Security.Cryptography.X509Certificates.OpenFlags]::ReadWrite)
$store.Add($CertToImport)
$store.Close()

Remove-Item $certTempFile

After this I can remote in to my VM running up on Azure and perform all the tasks I want to. Isn't it amazing ;)

Using the cmdlet Get-AzureWinRMUri, we get the connection URI.




#Now I can use the Get-AzureWinrmUri
    $WinRMURi = (Get-AzureWinRMUri -ServiceName $cloudService -Name $VMName).AbsoluteUri


Also create a credential object to be passed when opening a PSSession.


#Convert our plain text password to secure string
$passwordsec = ConvertTo-SecureString -String $password -AsPlainText -Force
#create the Creds Object
$cred = new-object -typename System.Management.Automation.PSCredential -argumentlist $username,$passwordsec

#Open up a new PSSession to the Azure VM
$Session = New-PSSession -ConnectionUri $WinRMURi -Credential $cred

Hopefully if we did everything right we will have a PSSession open.
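A quick sanity check on the session (a sketch):

# Should echo the remote VM's name (DexDC) if the session is healthy
Invoke-Command -Session $Session -ScriptBlock { $env:COMPUTERNAME }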


Add a new data disk to VM

Let's add the data disk now.

#add a new data disk to store the NTDS and SysVol folders
Get-AzureVM -ServiceName $cloudService -Name $VMName |
    Add-AzureDataDisk -CreateNew -DiskSizeInGB 20 -DiskLabel "NTDS" -LUN 0 |
    Update-AzureVM

Please note that at the end we need to pipe the output of Add-AzureDataDisk to Update-AzureVM.

If you had connected using RDP and opened diskmgmt.msc, you could have added the new disk the GUI way.

But we are going to use PowerShell for that, as the server we chose is Server 2012 R2 (which ships with the disk management cmdlets).


Below is the code, which will initialize, partition and format our new disk:

Invoke-Command -Session $session -ScriptBlock {
    Get-Disk |
        where partitionstyle -eq 'raw' |
        Initialize-Disk -PartitionStyle MBR -PassThru |
        New-Partition -AssignDriveLetter -UseMaximumSize |
        Format-Volume -FileSystem NTFS -NewFileSystemLabel "NTDS" -Confirm:$false
}

You can verify the result by running the Get-Disk cmdlet in the remote PSSession.
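For example:

# Verify the new volume from within the remote session
Invoke-Command -Session $session -ScriptBlock { Get-Disk }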


Install ADDS and a new domain

Perfect, now we have everything needed to promote this Azure VM as the first domain controller for our new forest.

We will put the NTDS & SysVol folders on the new data disk we added.



Invoke-Command -Session $Session -ArgumentList @($password) -ScriptBlock {
        Param ($password)
        # Set AD install paths on the data disk labeled NTDS
        $drive = Get-Volume | where { $_.FileSystemLabel -eq "NTDS" }
        $NTDSpath = $drive.DriveLetter + ":\Windows\NTDS"
        $SYSVOLpath = $drive.DriveLetter + ":\Windows\SYSVOL"
        Write-Host "Installing the first DC in the domain"
        Install-WindowsFeature -Name AD-Domain-Services -IncludeManagementTools
        # Install-ADDSForest expects a SecureString for the DSRM password
        $securePassword = ConvertTo-SecureString -String $password -AsPlainText -Force
        Install-ADDSForest -DatabasePath $NTDSpath -LogPath $NTDSpath -SysvolPath $SYSVOLpath -DomainName "dex.com" -InstallDns -Force -Confirm:$false -SafeModeAdministratorPassword $securePassword
    }

Reboot your VM and you have your test domain up and ready in the cloud (for me it is dex.com).

One more thing: once all was done, I switched my Domain Controller to Server Core ;)

Below is the snippet which does it for me.




#Convert to Server Core
Invoke-Command -Session $Session -script { Uninstall-WindowsFeature Server-Gui-Mgmt-Infra,Server-Gui-Shell -Restart}



That's it for today; probably one more post will follow, focusing on doing this entire setup using Azure Automation (workflows).

I will be showing this at Microsoft Community Day on 23rd August, let's see if I can get that recorded.

[UPDATE] You can find the Script Snippet in entirety at below link:
https://gist.github.com/DexterPOSH/ae7ddcc6fa6aafacebc4

Resources:

http://michaelwasham.com

http://blogs.technet.com/b/keithmayer/archive/2014/08/15/scripts-to-tools-auto-provisioning-azure-virtual-networks-with-powershell-and-xml.aspx

http://blogs.blackmarble.co.uk/blogs/rhepworth/post/2014/03/03/Creating-Azure-Virtual-Networks-using-Powershell-and-XML.aspx

http://blogs.technet.com/b/kevinremde/archive/2013/01/19/create-a-windows-azure-network-using-powershell-31-days-of-servers-in-the-cloud-part-19-of-31.aspx


http://blogs.technet.com/b/keithmayer/archive/2014/04/04/step-by-step-getting-started-with-windows-azure-automation.aspx




PowerShell + WPF + GUI : Hide (Use) background PowerShell Console

A few years back, I started wrapping my PowerShell scripts with some sort of GUI built using Windows Forms (mostly with PrimalForms CE). Things went fine for a while, but then I stumbled across awesome posts by MVP Boe Prox on using WPF with PowerShell to do the same (check the Resources section).

I had been putting off the idea of playing with WPF for a while, but then I had a great discussion with MVP Chendrayan (Chen) and got inspired to do it.

One can use Visual Studio (Express Edition, which is free) to design the UI and then consume the XAML in a PowerShell script... Isn't that cool! See the Resources section for links on that.

Often when we write code to present a nice UI to the end user, there is a PowerShell console running in the background. In this post I would like to share a trick to hide/show that background console window. This trick works with both WinForms and XAML.

Note - PowerGUI & Visual Studio Express are absolutely FREE !

For the demo of this post I have a GUIdemo.ps1 script with below contents :


[xml]$xaml= @"
<Window
    xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
    xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
    x:Name="HideWindow" Title="Initial Window" WindowStartupLocation = "CenterScreen"
    Width = "335" Height = "208" ShowInTaskbar = "True" Background = "lightgray">
    <Grid Height="159" Name="grid1" Width="314">
        <TextBox Height="46" HorizontalAlignment="Left" Margin="44,30,0,0" Name="textBox" VerticalAlignment="Top" Width="199" />
        <CheckBox Content="Show PS Windpw" Height="52" HorizontalAlignment="Left" Margin="34,95,0,0" Name="checkBox" VerticalAlignment="Top" Width="226" FontSize="15" />
    </Grid>
</Window>
"@

Add-Type -AssemblyName PresentationFramework
$reader=(New-Object System.Xml.XmlNodeReader $xaml)
$Window=[Windows.Markup.XamlReader]::Load( $reader )

#Tie the Controls
$CheckBox = $Window.FindName('checkBox')
$textbox = $Window.FindName('textBox')


$CheckBox.Add_Checked({$textbox.text = "Showing PS Window"; Show-Console})
$CheckBox.Add_UnChecked({$textbox.text = "Hiding PS Window"; Hide-Console})
$Window.ShowDialog()


Save the above contents in a file, right-click on it and select "Run with PowerShell".


This will open up a PowerShell console window and our simple UI. Right now, if you check the checkbox it only writes to the textbox, but later on we will be able to toggle the background PowerShell window on and off.


One of the simplest ways to hide this window, as I have shown in one of my earlier posts, is by using PowerGUI to wrap it as an exe.

Open the Script in PowerGUI Script Editor and then go to Tools > Compile Script




This will open up another window where you can select to not show the PowerShell console.



But this option will either permanently hide the window or permanently show it. What if I wanted to give the end users an option to toggle the background PowerShell window on and off? What purpose would that serve, you ask?

Well I have been using this technique for a while to see various verbose messages being generated by my backend PowerShell functions.
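For instance, a backend function like this (a hypothetical example, not part of the demo script) would surface its verbose stream in that background console while the GUI stays clean:

function Get-BackendData {
    [CmdletBinding()]
    param()
    Write-Verbose -Message "Querying the data source..."
    Get-Date # placeholder for the real work
}

Get-BackendData -Verbose # the verbose output lands in the PowerShell console, not in the GUI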


In addition to that, we can use Write-Host to highlight a few key things in the console (Write-Host is the perfect candidate here because we just want to show stuff on the host).

So let's add the code which will provide the functionality to toggle the background PowerShell console.

I re-used the code provided at PowerShell.cz for the P/Invoke part.


#Function Definitions
# Credits to - http://powershell.cz/2013/04/04/hide-and-show-console-window-from-gui/
Add-Type -Name Window -Namespace Console -MemberDefinition '
[DllImport("Kernel32.dll")]
public static extern IntPtr GetConsoleWindow();

[DllImport("user32.dll")]
public static extern bool ShowWindow(IntPtr hWnd, Int32 nCmdShow);
'


function Show-Console {
    $consolePtr = [Console.Window]::GetConsoleWindow()
    [Console.Window]::ShowWindow($consolePtr, 5) # 5 = SW_SHOW
}

function Hide-Console {
    $consolePtr = [Console.Window]::GetConsoleWindow()
    [Console.Window]::ShowWindow($consolePtr, 0) # 0 = SW_HIDE
}


Now it's time to bind the functions above to the checkbox control and use a few Write-Host statements in the code to make my case.


[xml]$xaml= @"
<Window
    xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
    xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
    x:Name="HideWindow" Title="Initial Window" WindowStartupLocation = "CenterScreen"
    Width = "335" Height = "208" ShowInTaskbar = "True" Background = "lightgray">
    <Grid Height="159" Name="grid1" Width="314">
        <TextBox Height="46" HorizontalAlignment="Left" Margin="44,30,0,0" Name="textBox" VerticalAlignment="Top" Width="199" />
        <CheckBox Content="Show PS Window" Height="52" HorizontalAlignment="Left" Margin="34,95,0,0" Name="checkBox" VerticalAlignment="Top" Width="226" FontSize="15" />
    </Grid>
</Window>
"@

Add-Type -AssemblyName PresentationFramework
$reader=(New-Object System.Xml.XmlNodeReader $xaml)
$Window=[Windows.Markup.XamlReader]::Load( $reader )

Write-Host -ForegroundColor Cyan -Object "Welcome to the Hide/Show Console Demo"
Write-Host -ForegroundColor Green -Object "Demo by DexterPOSH"
#Tie the Controls
$CheckBox = $Window.FindName('checkBox')
$textbox = $Window.FindName('textBox')


#Function Definitions
# Credits to - http://powershell.cz/2013/04/04/hide-and-show-console-window-from-gui/
Add-Type -Name Window -Namespace Console -MemberDefinition '
[DllImport("Kernel32.dll")]
public static extern IntPtr GetConsoleWindow();

[DllImport("user32.dll")]
public static extern bool ShowWindow(IntPtr hWnd, Int32 nCmdShow);
'


function Show-Console {
    $consolePtr = [Console.Window]::GetConsoleWindow()
    [Console.Window]::ShowWindow($consolePtr, 5) # 5 = SW_SHOW
}

function Hide-Console {
    $consolePtr = [Console.Window]::GetConsoleWindow()
    [Console.Window]::ShowWindow($consolePtr, 0) # 0 = SW_HIDE
}

Hide-Console # hide the console at start
#Events
Write-host -ForegroundColor Red "Warning : There is an issue"
$CheckBox.Add_Checked({$textbox.text = "Showing PS Window"; Show-Console})
$CheckBox.Add_UnChecked({$textbox.text = "Hiding PS Window"; Hide-Console})
$Window.ShowDialog() | Out-Null


Now copy the above code into PowerGUI and wrap it as an exe. Once done, open the exe. Below is an animated GIF showing it in action (at the end it's a bit blurry because of the low-FPS capture in GifCam).




PowerShell + SCCM - POSH Deploy v1

I wrote an article for the coveted PowerShell Magazine on how to automate query based deployments in ConfigMgr using PowerShell.

http://www.powershellmagazine.com/2014/07/22/automating-deployments-in-configuration-manager-with-powershell/

If you go through that article, you will get the background for this post.


Continuing from that, I present "POSH Deploy v1", which can be used to automate Query Based deployments with Configuration Manager (tested on CM 2012 only). I had a similar tool built in my previous project for SCCM 2007, but that one had a lot of hard-coded values; I tried to remove those.

The earlier tool used WinForms; this time I pushed myself to try out WPF with PowerShell. Thanks to StackOverflow and the blog posts shared around these topics by awesome community people :)


Personally, I feel WPF has made things a bit simpler for me (less code), and extending functionality is a breeze. I am not an expert on WPF right now, but I am getting around it. If you have feedback, it is welcome ;)

[UPDATE] Shout out to a few people: James Maggin for testing it out with patience; Harjit Dhaliwal, who initially motivated me to do this; Michael Blanik, who contacted me through my blog and gave great feedback (updated my Script); and the awesome community which drives this as a whole.

Michael asked me why I went for Query Based Rules instead of Direct Membership ones, and I remembered reading this article by Eswar Koneti here.


Below is the Technet Link to the Script:
http://gallery.technet.microsoft.com/POSH-Deploy-Tool-to-ffc25b36



P.S. - No need to say it, but please test it thoroughly in a test environment before hitting your PROD ones.

So let's start with the tool UI; it's a very basic one. The Action button is disabled at start.



Steps to follow :
1. Enter your ConfigMgr Server Name (one with SMS Namespace Provider installed).
2. Then hit "Test SMS Connection"
3. After a successful connection has been established to the ConfigMgr Server, hit the "Sync Collections List" button. This will dump your full Device Collections list to the user's MyDocuments folder as Collection.csv.

Note - The Collection.csv won't contain the collection names matching the pattern "All*". This was done so that someone does not accidentally play with Collections like All Systems, All Mobile Devices, etc.
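Under the hood, the filtering amounts to something like this (a sketch, not the tool's exact code; substitute your own site code in the namespace):

Get-WmiObject -Namespace "root\sms\site_XYZ" -Class SMS_Collection |
    Where-Object { $_.Name -notlike 'All*' } |
    Select-Object -Property Name, CollectionID |
    Export-Csv -Path "$([Environment]::GetFolderPath('MyDocuments'))\Collection.csv" -NoTypeInformation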

Once you have completed the above steps, you will see the collection list being populated. The Action button gets activated after a successful test connection.





There are basically two actions which can be performed with this tool:

  • Add Name to Query
  • Remove Name from Query
I tried to explain a few things in the below video (at the end an error was thrown for a direct membership rule; I modified the code and it handles that now):





Time to give a little background on the tool. The tool only works with Query Membership Rules.

Add Name to Query

If you select the "Add Name to Query" checkbox, the Action button text changes to "ADD". When you input a few machine names, select a few collections and hit the Action button, behind the scenes a PowerShell function takes the computer names and the selected collections, looks for a Query Membership Rule named "Automated_QueryRule" on each collection (creating one if not found) and then does text manipulation on the query expression of that rule. The end result is that the computer name gets added to the query rule.
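To illustrate the idea (a hypothetical sketch, not the tool's exact code), the rule's query expression is plain WQL text, so adding a name boils down to string surgery:

$query = "select * from SMS_R_System where SMS_R_System.Name in ('PC001','PC002')"
$newName = 'PC003'
# splice the new name in just before the closing parenthesis of the IN clause
$updated = $query -replace "\)$", ",'$newName')"
$updated
# select * from SMS_R_System where SMS_R_System.Name in ('PC001','PC002','PC003')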

The important point to note here is that the PowerShell function only touches the Query Membership Rule with the name "Automated_QueryRule", so all the rest of your rules are safe :)


Remove Name from Query

To perform this action, you basically follow the same steps as above.
Select the "Remove Name from Query" checkbox (you have to un-select the other checkbox to select this one), key in computer names, select collections and hit the Action button.

The key difference in how this action works is that it will iterate over each Query Membership Rule of a collection and remove the computer names from it.

NOTE !
A little note on the "Collection Integrity Check" button: sometimes the tool will just crash while a certain operation is in progress (fixing that). This button can be used to restore the last known good Query Membership Rule from PS_Deploy.csv.

By default it selects the last 3 entries in the CSV and checks whether each entry is in sync with the query on the collection; if not, it will create/modify the query. Use this with caution!! I haven't tested this much.

PowerShell + Azure Automation : Deploy a Windows 10 VM (Server Tech Preview) & domain join

Recently, VMs running the Server Technical Preview were added to the Azure Gallery, and I thought of deploying one joined to my test domain (see my post on using PowerShell to deploy a DC on Azure), but I wanted to do that using Azure Automation.

Azure Automation is a highly scalable workflow engine offering where we can have Runbooks (PowerShell workflows) to automate long-running, error-prone tasks.

I first saw this in action in a session by Ravi Sir [PowerShell MVP] at one of the PowerShell Bangalore User Group meets, where he used Runbooks to demonstrate really cool stuff.

Note - Azure Automation is still in preview so you might have to sign up for it by navigating to https://account.windowsazure.com/PreviewFeatures

Divided the post into 3 steps:
1. Get the Ground work ready
2. Create the Assets
3. Create a RunBook to deploy the VM (and add it to the domain)


Get the Ground work ready


First of all create an Azure Automation account from the portal (no PowerShell cmdlet for that at the moment). Once done it should show up in the Azure portal like below :








So before we start doing anything else, we need a mechanism in place to authenticate to Azure against my subscription. There are two ways to achieve that (links in the Resources section):

  1. Using Azure Active Directory
  2. Using Certificate Based Authentication

In my opinion, using Azure AD for the authentication is better and easier to explain (I am going to use that for this post). The below steps of using Azure AD for authentication are borrowed from the Azure blog found here.

Now Let's head to our Azure Active Directory > Default Directory > Users > Add User.






After this you will see a wizard to add a user; key in the username.
Click Next; be careful not to enable Multi-Factor Auth.



On the next screen, click "Create Temporary Password". Make a note of the complete username and temporary password; we will change it in the next step.



Now log out of the browser, or open another web browser, and navigate to https://manage.windowsazure.com/ .

Now, on the sign-in page, you have to specify the username of the user you created above (that's why we had to make a note of the username and password).
After this you will be asked to enter the password (key in the temporary one). Log in and you will be asked to change your password; do that.


Create the Assets

Let's look at what we are trying to do here: we are going to author a PowerShell workflow which will deploy a VM running the Server Technical Preview on Azure and then add the machine to my domain :)

But before we get ahead of ourselves, we have a few important questions to answer here.

  1. How does my workflow authenticate to Azure to add a new VM to my Cloud Service?
  2. How do we set the local admin username and password for the deployed VM? (We could hardcode these, but shouldn't.)
  3. Once the VM is up, how do we add it to the domain using another set of credentials?


This is where the Assets kick in (no cmdlets for this as of now).


We will have to create 3 Credential Assets (each stores a username and password) to tackle the problem at hand. Creating Assets is very straightforward:

Navigate to Azure Automation > Your Automation Account > Assets > Click on "Add Setting" (at the bottom). After this you will be presented with a page like below :

Select "Add Credential"

Give a name to the Asset; I have kept the Asset name the same as the Azure AD username.


 In the next screen give the User name and Password of the User we added in the first step to Azure AD.



The first Asset is created. Now I will similarly add 2 more Credential Assets: one named "DomainDexterPOSH", for a user in my domain dex.com which has permissions to add a machine to the domain, and another called "LocalDexterPOSH", to set the local admin username and password for the new VM which our workflow will provision.



Create a RunBook to deploy the VM



Under the Automation account in the Azure portal, I can create a new Runbook from scratch (the portal supports authoring workflows), or create a workflow locally and upload it using the Azure Automation cmdlets. The Azure Automation cmdlets are evolving at the moment, so for this post let's focus on using the portal to author Runbooks.

Let's get familiar with how a runbook looks in the Azure portal.
To create a new runbook in the portal, click "New" and give it a name (see below).



Your runbook name should be unique among your runbooks.
Once done you will see the Runbook sitting in your Azure Automation account.





Click on it and it lands you in the editing area for the Runbook. See the below screenshot:



Notice that the workflow name and the Runbook name have to be the same.

The Workflow editor is pretty cool, you can play with that. This is one way of authoring things up in the Web browser.

Now, the Runbooks in Azure Automation are essentially PowerShell workflows, and they have a few differences from the workflows which we author locally (if you have done that). The whole workflow is available at the below link:
https://gist.github.com/DexterPOSH/e9dceb72a6f171bd3d97

Basically, you copy the whole workflow and paste it into your runbook.
Once you have your workflow authored and tested well, you can publish it. This way you can use that Runbook inside other runbooks too.
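For example, a sketch of a parent runbook calling this one inline (the argument values are placeholders):

workflow Invoke-LabSetup
{
    # a published runbook in the same Automation account can be invoked like a command
    New-TestVM -AzureConnectionName 'MySubscription' -ServiceName 'DexCloudService' -VMName 'LabVM01'
}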




Once the workflow is published, you can run it by clicking on the "Start" button at the bottom (this only shows up after you publish it; if you want to run a workflow while authoring it, there is a "Test" button, see the above pic):



Clicking on "Start" will prompt you to supply arguments to the parameters, like below:

After this you can view the Job




Note - The workflow is very specific to my test LAB on Azure and you will have to substitute values and tweak the workflow for your own environment.

Below is an attempt to explain what the workflow does.

First, the parameters our Runbook will take:


workflow New-TestVM
{
    param(
        [parameter(Mandatory)]
        [String]
        $AzureConnectionName,
  
        [parameter(Mandatory)]
        [String]
        $ServiceName,
   
        [parameter(Mandatory)]
        [String]
        $VMName,
                   
        [parameter()]
        [String]
        $InstanceSize = "Medium"
  
    )

So the Workflow named New-TestVM (Note here that the workflow name and the Runbook name should be the same) will take 4 parameters for specifying the below :


  1. Subscription Name ($AzureConnectionName)
  2. Cloud Service Name ($ServiceName)
  3. Name of the new VM to provision ($VMname)
  4. Instance size of the VM ($InstanceSize which by default is Medium)


The first step in a Runbook is to authenticate to Azure; this is where we use the below code snippet:


$verbosepreference = 'continue'

    #Get the Credentials to authenticate against Azure
    Write-Verbose -Message "Getting the Credentials"
    $Cred = Get-AutomationPSCredential -Name "AuthAzure"
    $LocalCred = Get-AutomationPSCredential -Name "LocalDexterPOSH"
    $DomainCred = Get-AutomationPSCredential -Name "DomainDexterPOSH"



    #Add the Account to the Workflow
    Write-Verbose -Message "Adding the AuthAzure Account to Authenticate"
    Add-AzureAccount -Credential $Cred

    #select the Subscription
    Write-Verbose -Message "Selecting the $AzureConnectionName Subscription"
    Select-AzureSubscription -SubscriptionName $AzureConnectionName

    #Set the Storage for the Subscription
    Write-Verbose -Message "Setting the Storage Account for the Subscription"
    Set-AzureSubscription -SubscriptionName $AzureConnectionName -CurrentStorageAccountName "dexterposhstorage"

In the above code snippet we retrieve all of the Assets we created, then use one of them, $Cred, to authenticate to Azure. After that we select the subscription against which our workflow will run (& automate stuff). One more thing: as we are going to create a new VM, we need to specify the storage account as well.


Now the below code snippet will :

  • Get the Image details using Get-AzureVMImage and store the ImageName property in a variable
  • Then we derive the Username and Password from the LocalCred which stores one of the Credential Asset
  • Finally we specify all the configurations to the cmdlet New-AzureQuickVM (e.g. SubnetName, Username, Password, ImageName etc.). Note the use of the -WaitForBoot switch: it passes control to the next activity only once the VM is up and running.
  • After this let's do a checkpoint so that if something fails in our workflow past this point, it should be able to resume from here.



#Select the most recent Server Technical Preview image
    Write-Verbose -Message "Getting the Image details"
    $imagename = Get-AzureVMImage |
                     Where-Object -FilterScript { $_.ImageFamily -eq "Windows Server Technical Preview" } |
                     Sort-Object -Descending -Property PublishedDate |
                     Select-Object -First 1 |
                     Select-Object -ExpandProperty ImageName

    #use the above Image selected to build a new VM and wait for it to Boot
    $Username = $LocalCred.UserName
    $Password = $LocalCred.GetNetworkCredential().Password

    New-AzureQuickVM -Windows -ServiceName $ServiceName -Name $VMName -ImageName $imagename -Password $Password -AdminUsername $Username -SubnetNames "Rest_LAB" -InstanceSize $InstanceSize  -WaitForBoot
    Write-Verbose -Message "The VM is created and booted up now..Doing a checkpoint"

    #CheckPoint the workflow
    CheckPoint-WorkFlow
    Write-Verbose -Message "Reached CheckPoint"

This will take some time and after the machine is provisioned, we have another task at hand of adding the new VM to our domain. This can be achieved by opening a PSSession to the new VM and performing the action.

Below is the code which will open a PSSession to the machine using the LocalCred Asset and perform the domain join passing the DomainCred as argument to the Scriptblock:



    #Get the WinRM URI of the VM (used to open a PSSession to it)
    $WinRMURi = Get-AzureWinRMUri -ServiceName $ServiceName -Name $VMName | Select-Object -ExpandProperty AbsoluteUri

    InlineScript
    {
        do
        {
            #open a PSSession to the VM
            $Session = New-PSSession -ConnectionUri $Using:WinRMURi -Credential $Using:LocalCred -Name $using:VMName -SessionOption (New-PSSessionOption -SkipCACheck ) -ErrorAction SilentlyContinue
            Write-Verbose -Message "Trying to open a PSSession to the VM $using:VMName"
        } While (! $Session)
 
        #Once the Session is opened, first step is to join the new VM to the domain
        if ($Session)
        {
            Write-Verbose -Message "Found a Session opened to VM $using:VMname. Now will try to add it to the domain"
                              
            Invoke-command -Session $Session -ArgumentList $Using:DomainCred -ScriptBlock {
                param($cred)
                Add-Computer -DomainName "dex.com" -DomainCredential $cred
                Restart-Computer -Force
            }
        }  
    }
#Workflow end



Resources:

Use Azure AD to authenticate to Azure
http://azure.microsoft.com/blog/2014/08/27/azure-automation-authenticating-to-azure-using-azure-active-directory/

PowerShell + Server 2016 TP3: Deploy using Azure Automation


With awesome and exciting features being shipped in Server 2016 TP3, I am sure you are going to take a crack at it.

What better way to deploy it than in the cloud, in a few minutes, using an Azure Automation Runbook. I did something similar a while back and have updated the Script to expose new parameters and to use the Server 2016 TP3 image by default (you can override this value).


Also, if you are an ITPro like me and have a test LAB running on Azure, then you can pass in arguments to the -DomainName and -DomainCredName parameters, and the Runbook retrieves the Automation Credential Asset and joins the Server 2016 TP3 VM to the AD domain you have running on Azure (make sure the VM is in a subnet which can reach your DC and that DNS is configured properly for name resolution).
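Kicking off the updated runbook could then look like this (a sketch using the preview-era Azure Automation cmdlets; the account name and all the values below are placeholders):

Start-AzureAutomationRunbook -AutomationAccountName 'DexAutomation' -Name 'New-TestVM' `
    -Parameters @{
        AzureConnectionName = 'MySubscription'
        ServiceName         = 'DexCloudService'
        VMName              = 'TP3VM01'
        DomainName          = 'dex.com'
        DomainCredName      = 'DomainDexterPOSH'
    }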


If you are new to the whole Azure Automation thing, then do check out my previous post, which explains how the Script is laid out and how to use it:

PowerShell + Azure Automation : Deploy a Windows 10 VM (Server Tech Preview) & domain join (Script updated in Technet too)


Below is the whole script:



PowerShell + AD + Pester : Create new user using template - Day 1

I did a blog post way back on creating new users in AD using an already existing user as a template, but many people commented that using the template didn't copy the home directory, logon script path, group membership, etc. So finally I tried my hand at writing a Function which does a better job at this.

The idea is to write a New-ADUserFromTemplate function, to which you specify all the properties you want copied while creating a User from an existing User (template User).


Let's make it fun and write the code using the Behavior Driven Development approach with Pester. This will probably be a 2-part series:

  • Day 1 - Getting the Ground Work ready, Pester tests for Parameter, Help & Module dependency.
  • Day 2 - Write Pester tests and code for the actual function. Refactoring the Code.

So we plan to do BDD or TDD here which means we write tests first and then follow the below cycle :






Disclaimer - Not an expert on BDD/TDD, but constructive feedback is always welcome.


Let's start by creating a fixture named New-ADUserFromTemplate using the New-Fixture function:
PS>New-Fixture -Path .\New-ADUserFromTemplate -Name New-ADUserFromTemplate 
This creates a folder named New-ADUserFromTemplate with two files inside it.
PS>ls

Directory: C:\Users\Deepak_Dhami\Documents\WindowsPowerShell\Scripts\New-ADUserFromTemplate


Mode LastWriteTime Length Name
---- ------------- ------ ----
-a--- 8/16/2015 7:43 AM 2054 New-ADUserFromTemplate.ps1
-a--- 8/16/2015 6:39 AM 2162 New-ADUserFromTemplate.Tests.ps1


The first one is a PS1 file with an empty Function definition for New-ADUserFromTemplate, and the other one, matching *.Tests.ps1, is the file where our tests will live. If you haven't picked up Pester yet, check out the Resources section.

So we start by writing tests which define the behavior of our code; at first we have an empty function, so obviously our tests will fail (have patience). The next thing on our mind should not be to write an advanced function in PowerShell; instead, write the simplest code which passes all the tests. The idea is to get it right first, before putting all the fancy stuff in there.

Once the tests start passing, my code has achieved the behavior I wanted. Now it is time to refactor the code (keep running the tests when you modify code, to make sure the behavior hasn't changed).



You won't realize the tremendous benefit this has on your scripting efforts until your Function or Module starts growing; when it does, you will be thankful that you modeled the code properly from the start. The below pic sums up what I mean here:

Credits - memegenerator.net
So let's do it.

First determine the behavior of the Function which we are going to write. The thing which comes to my mind at first is to make sure that I put help and correct parameters in my function.

For the code I am writing, I want my parameters to mimic the ones which the AD Users and Computers snap-in GUI provides while copying a User to use as a template (select User > right-click > "Copy").




Let's define the behavior of the Function:
  • It should have inbuilt help along with Description and examples.
  • It should have SamAccountName and GivenName (FirstName maps to GivenName in AD) as mandatory parameters. Also, it should have a mandatory Instance parameter which takes an AD User object as input to use as the template.
  • My Function will depend on ActiveDirectory PS Module.
  • It should take the OU path from the Template User.
  • Based on some constraints in the AD schema, we can only copy a few attributes from a template User. Make sure the allowed attributes, if present on the template User, are copied to the new User.
  • It should allow us to select a subset of allowed attributes to copy.
Wow, that is easy. Now let's rewrite our function's behavior in Pester. I will write the Pester tests and, side by side, refactor my code so that the tests pass.

To begin with, Pester test files have a preamble, where we dot-source the Function/Module under test along with any helper functions. Below is how a Pester tests file looks:
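For reference, the New-Fixture-generated preamble (plus the dot-sourcing of the helper file mentioned below) looks like this:

$here = Split-Path -Parent $MyInvocation.MyCommand.Path
$sut = (Split-Path -Leaf $MyInvocation.MyCommand.Path).Replace(".Tests.", ".")

. "$here\$sut"
. "$here\HelperFunctions.ps1" # helper functions used by the tests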



Note that I named my Describe block "New-ADUserFromTemplate", as that is exactly what I am doing: describing the behavior of my Function to Pester.

In the file preamble, I load New-ADUserFromTemplate.ps1 along with HelperFunctions.ps1 (which contains 2 functions: Test-MandatoryParam & Compare-ADUser).
 

I tend to organize my tests in the context of the testing I am doing. So you would see in the above screenshot that there are 3 Context blocks, organized in my mind in the below way:
  1. Context "Help and Parameter Checks"
      It should have inbuilt help along with Description and examples.
      It should have SamAccountName, GivenName & Instance as mandatory parameters.
  2. Context "ActiveDirectory Module Checks"
      It should fail if the AD Module is not present.
      It should fail if the AD PSDrive is not loaded. (Extreme case)
  3. Context "User Creation"
      It should take the OU path from the template User.
      It should only copy the allowed set of attributes from the User (by default).
      It should allow copying a subset of the allowed set of attributes.
See, piece of cake. If you have thought over how the code will behave beforehand and written tests describing it, then half of the job is done.

Let's start with the first Context block, "Help and Parameter Checks".

Inside the Context block, we have It blocks, each of which is essentially a unit test. It is inside an It block that we put our assertions (in layman's terms, comparisons between the expected and actual output, etc.).
Below is how my first Context block looks:
   Context "Help and Parameter checks" {
        Set-StrictMode -Version latest
        It 'should have inbuilt help along with Description and examples' {
            $helpinfo = Get-Help New-ADUserFromTemplate
            $helpinfo.examples | should not BeNullOrEmpty  # should have examples
            $helpinfo.Details | Should not BeNullOrEmpty   # Should have Details in the Help
            $helpinfo.Description | Should not BeNullOrEmpty # Should have a Description for the Function
        }
     
        It 'Should have SamAccountName, GivenName & Instance Mandatory params' {
            {New-ADuserFromTemplate} | Should Throw
            {New-ADuserFromTemplate -samAccountName $null } | should throw
            {New-ADuserFromTemplate -GivenName $null } | should throw
            {New-ADuserFromTemplate -Instance $null } | should throw
            {New-ADuserFromTemplate -GivenName $Null -SamAccountName $null -Instance $Null } | Should Throw
        }
    } # end Context
        

The first It block tests that my Function's help always has examples, details and a description. Credits to Andy Schneider for sharing this.

The second It block is a bit tricky. I defined that the parameters SamAccountName and GivenName be mandatory parameters for my Function. Now, the first assertion which naturally comes to mind is:
{New-ADuserFromTemplate} | should throw

But this won't work; we will see why later. So, as a temporary workaround for not specifying anything to the mandatory parameters, I am passing them $Null.
{New-ADuserFromTemplate -samAccountName $null } | should throw

There are some other ways to look at the Function metadata (MVP Dave Wyatt pointed that out to me), but exploring them would make this post deviate from the original objective. So my tests which define the help & parameter behavior of my function are ready.
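As a quick taste of that metadata approach (a sketch; not the method used in this post):

# inspect the parameter attributes directly instead of invoking the function
$param = (Get-Command New-ADUserFromTemplate).Parameters['SamAccountName']
$mandatory = ($param.Attributes |
    Where-Object { $_ -is [System.Management.Automation.ParameterAttribute] }).Mandatory
$mandatory | Should Be $true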

Let's go through the Red phase first :

PS>invoke-pester -TestName 'New-ADUserFromTemplate'
Executing all tests in 'C:\Users\DexterPOSH\Documents\WindowsPowerShell\Pester_Tests\new-aduserfromtemplate' matching test name 'New-ADUserFromTemplate'
Describing New-ADUserFromTemplate
Context Help and Parameter checks

[-] should have inbuilt help along with Description and examples 204ms
Expected: value to not be empty
at line: 16 in C:\Users\Deepak_Dhami\Documents\WindowsPowerShell\Pester_Tests\new-aduserfromtemplate\New-ADUserFromTemplate.Tests.ps1
[-] Should have SamAccountName & GivenName as Mandatory params 37ms
Expected: the expression to throw an exception
at line: 22 in C:\Users\Deepak_Dhami\Documents\WindowsPowerShell\Pester_Tests\new-aduserfromtemplate\New-ADUserFromTemplate.Tests.ps1


All right, now let's go and work on making these tests pass by modifying our New-ADUserFromTemplate function. Below is what I added to the Function definition:

function New-ADUserFromTemplate
{
<#
.Synopsis
   Function which enables creating new users using a Template
.DESCRIPTION
   Function which will use a User as a template and then copy set of below attributes to the new user.

.EXAMPLE
    First get the AD user Stored in a variable with all the properties (it copies only a subset of properties on the Object supplied)
    PS> $TemplateUser = Get-ADUser -identity Test1 -Properties *
    PS> New-ADUserFromTemplate -SamAccountname newuser123 -GivenName NewUser -Instance $TemplateUser
.EXAMPLE
   If the AD User Object doesn't have all the Properties on it then the Function only selects the available ones.
    PS> $TemplateUser = Get-ADUser -identity Test1
    PS> New-ADUserFromTemplate -SamAccountname newuser123 -GivenName NewUser -Instance $TemplateUser
#>

[CmdletBinding()]
   param(
        [Parameter(Mandatory=$True)]     
        [string]$SamAccountName,

        [Parameter(Mandatory)]     
        [string]$GivenName,

        [Parameter(Mandatory)]
        [Microsoft.ActiveDirectory.Management.ADUser]$Instance
   )
}

Once this is done, I invoke Pester again and see the tests in the Context passing, but:




Let's remove the blocking test (calling the function with no arguments makes PowerShell prompt for the mandatory parameter values instead of throwing, which stalls the test run) and run the Pester tests again; we see green:



Now it is time to refactor. Make sure that after any changes you make, run Pester to validate that nothing has changed.

Moving on to the next Context of testing. It looks like below :


 Context "ActiveDirectory Module Available" {      
        $TemplateUser = [pscustomobject]@{
                                            Name='testuser'
                                            UserPrincipalName='testuser@dex.com'
                                            PStypeName = 'Microsoft.ActiveDirectory.Management.ADUser'
                                            }
      

        It "Should Fail if the AD Module not present" {
            Mock -CommandName Import-Module -ParameterFilter {$name -eq 'ActiveDirectory'} -MockWith {Throw (New-Object -TypeName System.IO.FileNotFoundException)} -Verifiable
            {New-ADUserFromTemplate -SamAccountName test123 -GivenName 'test 123' -Instance $TemplateUser } | should throw          
            Assert-VerifiableMocks
        }      
    }

Note that in the Context scope, a custom object $TemplateUser is initialized, which will be passed later on to the Function (the Instance parameter is mandatory).

Now, the problem is my machine has the AD Module, so how do I simulate a situation where the machine doesn't have it? This is where mocking comes into the picture.


In a machine where the AD module is present, if I run below: 

Import-Module -name ActiveDirectory -ErrorAction stop

It would succeed, but in a machine which doesn't have the module named ActiveDirectory an exception will be thrown. This would be our mock.
Mock -CommandName Import-Module -ParameterFilter {$name -eq 'ActiveDirectory'} `
     -MockWith {Throw (New-Object -TypeName System.IO.FileNotFoundException)} -Verifiable 

Above, we mock Import-Module -Name ActiveDirectory. Notice that -ParameterFilter {$name -eq 'ActiveDirectory'} mocks the Import-Module cmdlet only when 'ActiveDirectory' is passed as the argument to the -Name parameter.

Also, the -Verifiable switch at the end makes the mock verifiable. How? Simple: use
Assert-VerifiableMocks at the end of the It block. It verifies that the mocks marked with the -Verifiable switch were actually called during the Function run.

After writing tests, run them (Red Phase), then write the bare minimum code to make it pass (Green) and keep refactoring. Skipping Red and jumping to Green for the above context (as you already have an idea).

My bare minimum Function definition passing both the Context tests is below :



function New-ADUserFromTemplate
{
<#
.Synopsis
   Function which enables creating new users using a Template
.DESCRIPTION
   Function which will use a User as a template and then copy set of below attributes to the new user.

.EXAMPLE
    First get the AD user Stored in a variable with all the properties (it copies only a subset of properties on the Object supplied)
    PS> $TemplateUser = Get-ADUser -identity Test1 -Properties *
    PS> New-ADUserFromTemplate -SamAccountname newuser123 -GivenName NewUser -Instance $TemplateUser
.EXAMPLE
   If the AD User Object doesn't have all the Properties on it then the Function only selects the available ones.
    PS> $TemplateUser = Get-ADUser -identity Test1
    PS> New-ADUserFromTemplate -SamAccountname newuser123 -GivenName NewUser -Instance $TemplateUser
#>

[CmdletBinding()]
   param(
        [Parameter(Mandatory=$True)]      
        [string]$SamAccountName,

        [Parameter(Mandatory)]      
        [string]$GivenName,

        [Parameter(Mandatory)]
        [Object]$Instance
   )
    TRY {
        # try to import the Module
        Import-Module -name ActiveDirectory -ErrorAction stop
        $null = Get-PSDrive -Name AD -ErrorAction stop  # Query if the AD PSdrive is loaded
     
    }
    CATCH [System.IO.FileNotFoundException]{
        Write-Warning -Message $_.exception
        throw "AD module not found"
    } 
    CATCH {
        throw $_.exception
    }
}

Below is the result of my pester tests in Green phase :


PS>invoke-pester

Describing New-ADUserFromTemplate
Context Help and Parameter checks

[+] should have inbuilt help along with Description and examples 233ms
[+] Should have SamAccountName, GivenName & Instance Mandatory params 57ms

Context ActiveDirectory Module Available
WARNING: Unable to find the specified file.
[+] Should Fail if the AD Module not present 106ms


Oh my God! Why go to such lengths of trouble for a small function?

You are right! It is a whole lot of trouble, but this pays off when the function starts to grow or becomes part of a bigger module or script running in production. Running tests and seeing them pass gives you "confidence" or "trust" in your code.

Also you don't have to test each scenario manually ;)

In the next post, we will dive into writing tests (first) and the code which copies the attributes for a User from a template user. I am presenting on the very same topic at PowerShell Conference Asia @ Singapore next week, and I hope to make a strong case for Pester there.


Resources:


Practical PowerShell Unit Testing : Getting Started (Fantastic article)
https://www.simple-talk.com/sysadmin/powershell/practical-powershell-unit-testing-getting-started/


PowerShellMag articles on Pester:
http://www.powershellmagazine.com/tag/pester/

Copy User's Properties

https://technet.microsoft.com/en-us/library/dd378959(v=ws.10).aspx

AD Constraints

https://msdn.microsoft.com/en-us/library/cc223462.aspx





PSConfAsia : My experience

It has been a few weeks since PowerShell Conference Asia, and I have finally kicked off the laziness and made up my mind to blog about my experience.

PowerShell Conference Asia



Initial Hurdle

The first thing that came to my mind was... cost! I will circle back to this factor at the end.


Background

Milton, one of the organizers & a friend, asked me if I could speak about Pester. I was doing some stuff with Pester at that time and was reading the below book:

This book gives you background on the Why, What and How aspects of the testing philosophy. These might sound very silly to Devs, but being a System Admin who just stepped up his game, I needed these questions answered.

While reading this book, I had glimpses of the past, when I had a notion that testing is for Devs only. Till that point, the below sums up my testing methodology:



I made a note that in my session, I would focus more around the philosophy rather than the technical details as there are already great resources for Pester out there.


Finally Singapore -> PSConfAsia

The conference was my first PowerShell event outside India.

I was amazed to see the energy level of the crowd. Everyone was carrying laptops, handhelds or notebooks and I found many people taking notes, generating PowerShell transcripts etc.


I met a few of the participants and was surprised to see that many were Developers who were already using PowerShell. While interacting with people there, I realized that PowerShell has been key to the evolution of Infrastructure management in shops running Windows.

Sometimes I wonder if it was all part of a grand scheme, orchestrated by Jeffrey Snover & the PowerShell team at MSFT, to turn the average Windows Admin (GUI driven) into a full-blown, automation-driven ninja.

During the conference I had the pleasure of meeting all the people whom I knew via social networks. There were so many sponsored goodies for the participants that I was wondering if I would win one, e.g. an F1 ticket, an iPad mini, DSC and DevOps related books, a PowerShell Studio license by SAPIEN and many more.


Below are a few snaps:



Keynote by none other than Jeffrey Snover himself (via Skype)


Nana and Ravi Sir - Just before their session :)
Their sessions on DSC and Infra as Code were my favorites.


Panel discussion revolving around DevOps, Infra as Code, etc. This was fun, as we got to hear from some well-known names in the IT industry.

From left to right - Ferdinand Rios, Jaap Brasser, Ryan Yates, Ravikanth Chaganti, Narayanan (Nana) Lakshmanan and Jason Brown.


Below is a pic of how colorful the MSFT Singapore Office is :



The first day's Speakers' Dinner was sponsored by Kemp Technologies, and we all enjoyed a lot of local delicacies.

Below Jaap making a joke about his Birthdate


Just a reference for how awesome the place was; below is what we got for dessert :)




Nana in the background while delivering his DSC Internals session. In the foreground is my machine (using ISE to take notes, this is how I roll :D )



Dell has a PowerShell BIOS Provider which exposes the BIOS settings as a PSDrive. Girish Prakash from Dell R&D delivered an informative session on why the implementation chose PS Providers over cmdlets, and demoed the Provider.




Later in the evening there were drinks for everyone at a nearby cafe. Before that, I went for a little stroll with Jaap and Eswar Koneti (we talked mostly about technical things).

Eswar and me


Jaap and Me


Hoping you get the below :
PS>"{0} > {1} > {2}" -f $Me, $Miton, $Ravi


Circling back to the cost factor, the very first thing that came to my mind: looking back at the experience of meeting people from the PS community and learning a few things in the process, I have to say that it was all worth it.

I have already started making plans for the next conference.
Cheers

PowerShell + AD + Pester : create new user using template Part 2

It seems like it has taken me forever to post this one. I had it almost ready, but then I asked a few questions around, read a lot of posts and had to rewrite pieces of the post. To sum it up, it has been eye-opening trying to test PowerShell code which interacts with infrastructure.

The below pic depicts my state at this point (a revelation of a whole new world).


[ credits : movie "V for Vendetta"]

In the last post, we laid the foundation for our Function. Go back and check the code there, as we start from where we left off.

In this post we dive straight into the third context for our Pester tests:
  1. Context "User Creation"
      It should return an object when -Passthru is specified. (New addition)
      It should take the OU path from the template User.
      It should only copy the allowed set of attributes from the User (by default).
      It should allow copying a subset of the allowed set of attributes.

Note - I have added one more test to the context, which is in green. Why are the rest of the tests marked in red :O ? The answer follows in the conclusion section.


Before writing the tests, I wanted to share one important concept while practicing TDD/BDD. 

If at any point it becomes a pain to write tests, then you are probably doing it wrong. Tests should be easy to write; otherwise they point to a serious flaw in your logic. [Lesson learned the hard way]

Also, one has to remember that unit tests only test the logic of the code. So if I wrote code to create an AD user, my unit tests shouldn't be creating a test user each time (this would clutter things up). The easier way to test the logic is to mock the key pieces which have an external dependency.

But mocking is a slippery slope: when we use mocking we are actually testing the behavior of the code rather than its state, which is why this approach is also termed "behavior"-based testing.
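For example (a toy contrast; Save-Report is a hypothetical function that writes a file via Out-File):

# Behavior-based: assert that the collaborator was invoked; the mock means no file is touched
Mock -CommandName Out-File -Verifiable
Save-Report -Path 'C:\temp\report.txt'
Assert-VerifiableMocks

# State-based: assert the real side effect; this needs the real environment
Save-Report -Path 'C:\temp\report.txt'
'C:\temp\report.txt' | Should Exist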



Setting up Context


A quick search found me the below link on TechNet (going to use this as a reference for my function and will keep refactoring against it).
https://technet.microsoft.com/en-us/library/dd378959(v=ws.10).aspx

So it is clear: New-ADUser is the cmdlet (part of the AD PowerShell module) which will ultimately be called to create the user in our function.

In my example above, querying AD is an external dependency; it has nothing to do with the logic of my code. So it is wise to mock those cmdlets which have that sort of external dependency.


It is obvious that in order to create a new user I would be using New-ADUser cmdlet from the AD PowerShell module, but I will have to mock it and assert that it is being called by my code. So the context for my "user creation" tests will look like below to begin with :


    Context "User Creation" {
 
        BeforeEach {
            # Create a Dummy AD module and import it
            $DummyModule = New-Module -Name ActiveDirectory  -Function "New-ADUser","Get-ADUser" -ScriptBlock {
                                                                            Function New-ADUser {"New-ADUser"} ;
                                                                        }
            $DummyModule| Import-Module
        }

        AfterEach {
            # Forcefully remove the Dummy AD Module
            Remove-Module -Name ActiveDirectory -Force  -ErrorAction SilentlyContinue
        }
 
    }



Before moving forward, take a look at the Context block one more time. The BeforeEach{} and AfterEach{} blocks provide a way to set up and tear down the test environment for each unit test run (the It{} blocks).
Note - If you haven't heard of the BeforeEach{} block before, then pause and read Michael Sorens' article on test anatomy.

So, in the BeforeEach{} block, I create a dummy module named ActiveDirectory (the real AD PowerShell module is not present on my local machine) and export the Get-ADUser and New-ADUser functions.

In the AfterEach{} block, Remove-Module is called to unload the dummy module forcefully. Because of the above hack, I had to modify the module-loading part in my code (see the try block below, which changed).
Now the code first tests whether the module is already loaded; because the dummy module is loaded in the BeforeEach{} block, the function sees it and does not try to load the real module again.


        TRY {
            if ( -not (Get-Module -Name ActiveDirectory) ) {
                # try to import the Module
                Import-Module -name ActiveDirectory -ErrorAction stop
                $null = Get-PSDrive -Name AD -ErrorAction stop  # Query if the AD PSdrive is loaded
            }
        }

The reasons to use the dummy module are:
  1. The Pester tests don't depend on the AD PowerShell module.
  2. The AD PowerShell module is too complicated in its implementation. Read more here.

Test - It should return Object when -Passthru specified (New addition)


Now let's take a look at the test.

It "Should return object when -Passthru specified" {
            $TemplateUser = [PSCustomObject]@{Name='templateuser';UserPrincipalName='templateuser@dex.com'}
            Mock -CommandName New-ADuser  -MockWith {@{name='testuser'}} -Verifiable
            $CreatedUser = New-ADUserFromTemplate -GivenName 'test 123' -SamAccountName 'test123' -Instance $TemplateUser -Passthru
            Assert-VerifiableMocks # Assert that our verifiable mock for New-ADuser cmdlet was called.
            $Createduser | Should Not BeNullOrEmpty
     
        }

Inside the Unit test, I begin by storing a custom object in $TemplateUser which will be passed to our function.
Then the New-ADUser function (yes! it is a function loaded from my dummy AD module) is mocked to return a hashtable (it could be a custom object too). An observant eye will notice that the mock has been marked -Verifiable.


Later, Assert-VerifiableMocks is called to verify that all the verifiable mocks have been invoked. Also, at the end I assert that $CreatedUser should not be empty (-Passthru should return an object).

The tests fail at this point (Red phase) because I didn't modify the code, so I change my function definition.

So my bare-bones code looks like below now (I removed the help only here in this illustration, not from the actual code). People will criticize me for passing -Name the same argument as SamAccountName, but that is fine; the focus here is on the testing philosophy.

function New-ADUserFromTemplate {

   param(
        # Specify the unique SamAccountName for the User
        [Parameter(Mandatory)]               
        [ValidateNotNullOrEmpty()]
        [string]$SamAccountName,
   
        # Specify the First Name or Given name for the user
        [Parameter(Mandatory)]
        [ValidateNotNullOrEmpty()]
        [String]$GivenName,

        [Parameter(Mandatory)]
        #[PSTypeName('Microsoft.ActiveDirectory.Management.ADUser')]
        [Object]$Instance,

        [Switch]$Passthru
        )

        TRY {
            if ( -not (Get-Module -Name ActiveDirectory) ) {
                # try to import the Module
                Import-Module -name ActiveDirectory -ErrorAction stop
                $null = Get-PSDrive -Name AD -ErrorAction stop  # Query if the AD PSdrive is loaded
            }
        }
        CATCH [System.IO.FileNotFoundException]{
            Write-Warning -Message $_.exception
            throw "AD module not found"
        }
        CATCH {
            throw $_.exception
        }         

        # Let's start by following the link : https://technet.microsoft.com/en-us/library/dd378959(v=ws.10).aspx
        # blank out the UPN before re-using the template object; a UPN must be unique in the forest
        $Instance.UserPrincipalName = $Null

        if ($Passthru.IsPresent) {   
            New-ADuser -Name $SamAccountName -SamAccountName $SamAccountName -GivenName $GivenName -Instance $Instance -Enabled $False -Passthru
        }
        else {
          $null =  New-ADuser -Name $SamAccountname -SamAccountName $SamAccountName -GivenName $GivenName -Instance $Instance -Enabled $False
        }  
}

Now if I run the tests, this particular unit test passes (Green phase).

Test - It should NOT return object (by default)


Notice how I have added two more tests as I get clarity on the behavior of the function.
This unit test will check the exact opposite behavior of the previous test.



It "Should NOT return object by default" {
            $TemplateUser = [PSCustomObject]@{Name='templateuser';UserPrincipalName='templateuser@dex.com'}
            Mock -CommandName New-ADuser  -MockWith {@{name='testuser'}} -Verifiable
            $CreatedUser = New-ADUserFromTemplate -GivenName 'test 123' -SamAccountName 'test123' -Instance $TemplateUser
            Assert-VerifiableMocks # Assert that our verifiable mock for New-ADuser cmdlet was called.
            $Createduser | Should  BeNullOrEmpty
        }

The key difference in this unit test is that $Createduser should be empty. 

When I run the Pester tests at this point, the second unit test fails. But wait: in the function definition, the New-ADUser cmdlet is not passed the -Passthru switch if it was not specified to the New-ADUserFromTemplate function.

See the below excerpt from the function; it should have taken care of the test (right?).


        if ($Passthru.IsPresent) {  
            New-ADuser -Name $SamAccountName -SamAccountName $SamAccountName -GivenName $GivenName -Instance $Instance -Enabled $False -Passthru
        }
        else {
            New-ADuser -Name $SamAccountname -SamAccountName $SamAccountName -GivenName $GivenName -Instance $Instance -Enabled $False
        }

This is the downside of the dummy dynamic module / BeforeEach{} trick: the code just sees a dummy New-ADUser function, which doesn't mimic the real cmdlet's behavior (it returns output whether or not -Passthru is specified). The approach comes with that downside, but it shouldn't be hard to fix.

If I change the above code excerpt (if else condition) like below :



        if ($Passthru.IsPresent) {  
            New-ADuser -Name $SamAccountName -SamAccountName $SamAccountName -GivenName $GivenName -Instance $Instance -Enabled $False -Passthru
        }
        else {
          $null =  New-ADuser -Name $SamAccountname -SamAccountName $SamAccountName -GivenName $GivenName -Instance $Instance -Enabled $False
        }      

Now all the tests are passing, below is a screenshot showing that. 


Conclusion

We are at the end of this post; a few more posts will follow. Wait! What?
No more of the tests (marked in red below) mentioned at the start of the post for the context block!!!

Context "User Creation"
  It should take OU Path from template User.
  It should only copy allowed set of attributes from the User (by default).
  It should allow copying a subset of allowed set of attributes.
Honestly, I had all those unit tests written, but later I realized that the tests were actually checking the state of the created user rather than the behavior of the function. If you have looked carefully, the object returned from my function New-ADUserFromTemplate in the current context is a mocked object (it is in my control what I wish to return).

So ask yourself: does it make sense to run tests which check the state of a mocked object? You can only run these tests when you get an actual AD user which was created in AD.

To summarize, those tests are testing the "state" of the object, hence we can't use mocking for them. This completely changes, for good, the initial strategy I had in mind for testing my code.
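For example, the pending "OU path" behavior could live in an integration suite along these lines (a sketch, assuming a live test AD and an existing template user named Test1; not part of the original code):

Describe "New-ADUserFromTemplate" -Tags 'IntegrationTest' {
    It "takes the OU path from the template user" {
        $template = Get-ADUser -Identity Test1 -Properties *
        $created = New-ADUserFromTemplate -SamAccountName 'inttest1' -GivenName 'Int' -Instance $template -Passthru
        # both users should share the same parent OU (everything after the first RDN)
        ($created.DistinguishedName -split ',', 2)[1] |
            Should Be (($template.DistinguishedName -split ',', 2)[1])
        Remove-ADUser -Identity $created.SamAccountName -Confirm:$false # clean up the test user
    }
}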

Please read this article by Matt Wrock as it perfectly describes the dilemma I faced while writing this post, especially the part where he talks about mocking infrastructure.

http://www.hurryupandwait.io/blog/why-tdd-for-powershell-or-why-pester-or-why-unit-test-scripting-language

If you have anything to add on to my current approach then the feedback is much appreciated.

Below is the final state my code has achieved

Full Test Suite :

$here = Split-Path -Parent $MyInvocation.MyCommand.Path
$sut = (Split-Path -Leaf $MyInvocation.MyCommand.Path).Replace(".Tests.", ".")

. "$here\$sut"

#region Unit Test - Test only the logic, Mock the Shit out of Variables !
Describe "New-ADUserFromTemplate" -Tags 'UnitTest'{


    Context "Help and Parameter checks" {
        Set-StrictMode -Version latest
  
        It 'should have inbuilt help along with Description and examples' {
            $helpinfo = Get-Help New-ADUserFromTemplate
            $helpinfo.examples | should not BeNullOrEmpty  # should have examples
            $helpinfo.Details | Should not BeNullOrEmpty   # Should have Details in the Help
            $helpinfo.Description | Should not BeNullOrEmpty # Should have a Description for the Function
        }

        It 'Should have SamAccountName, GivenName & Instance Mandatory params' {
            # {New-ADuserFromTemplate} | Should Throw
            {New-ADuserFromTemplate -samAccountName $null } | should throw
            {New-ADuserFromTemplate -GivenName $null} | should throw
            {New-ADuserFromTemplate -Instance $null } | should throw
            {New-ADuserFromTemplate -GivenName $Null -SamAccountName $null -Instance $Null } | Should Throw
        }
    } # end Context

    Context "ActiveDirectory Module Available" {
        $TemplateUser = [pscustomobject]@{
                                            Name='testuser'
                                            UserPrincipalName='testuser@dex.com'
                                            #PStypeName = 'Microsoft.ActiveDirectory.Management.ADUser'
                                            }


        It "Should Fail if the AD Module not present" {
            Mock Get-Module -MockWith {$Null}
            Mock -CommandName Import-Module -ParameterFilter {$name -eq 'ActiveDirectory'} -MockWith {Throw (New-Object -TypeName System.IO.FileNotFoundException)} -Verifiable
            {New-ADUserFromTemplate -SamAccountName test123 -GivenName 'test 123' -Instance $TemplateUser } | should throw    
            Assert-VerifiableMocks
        }
    }

    Context "User Creation" {
  
        BeforeEach {
            # Create a Dummy AD module and import it
            $DummyModule = New-Module -Name ActiveDirectory  -Function "New-ADUser","Get-ADUser" -ScriptBlock {
                                                                            Function New-ADUser {"New-ADUser"} ;
                                                                        }
            $DummyModule| Import-Module
        }

        AfterEach {
            # Forcefully remove the Dummy AD Module
            Remove-Module -Name ActiveDirectory -Force  -ErrorAction SilentlyContinue
        }
  
  
        It "Should return object when -Passthru specified" {
            $TemplateUser = [PSCustomObject]@{Name='templateuser';UserPrincipalName='templateuser@dex.com'}
            Mock -CommandName New-ADuser  -MockWith {@{name='testuser'}}  -Verifiable
            $CreatedUser = New-ADUserFromTemplate -GivenName 'test 123' -SamAccountName 'test123' -Instance $TemplateUser -Passthru
            Assert-VerifiableMocks # Assert that our verifiable mock for New-ADuser cmdlet was called.
            $Createduser | Should Not BeNullOrEmpty
      
        }

        It "Should NOT return object by default" {
            $TemplateUser = [PSCustomObject]@{Name='templateuser';UserPrincipalName='templateuser@dex.com'}
            Mock -CommandName New-ADuser  -MockWith {@{name='testuser'}} -Verifiable
            $CreatedUser = New-ADUserFromTemplate -GivenName 'test 123' -SamAccountName 'test123' -Instance $TemplateUser
            Assert-VerifiableMocks # Assert that our verifiable mock for New-ADuser cmdlet was called.
            $Createduser | Should  BeNullOrEmpty
        }

    } #end Context

} #end Describe
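To run just the unit tests tagged above, Invoke-Pester with the tag filter does the job. A minimal sketch (assumes Pester is installed and the tests file follows the naming convention expected by $sut above):

# Run only the tests tagged 'UnitTest' from the tests file; adjust the path to your copy
Invoke-Pester -Script .\New-ADUserFromTemplate.Tests.ps1 -Tag 'UnitTest'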


New-ADUserFromTemplate Function :

 <#
.Synopsis
   Function which enables creating new users from a template user.
.DESCRIPTION
   Function which uses an existing AD user as a template and copies a set of attributes from it to the new user.

.EXAMPLE
    First get the AD user stored in a variable with all the properties (the function copies only a subset of the properties on the object supplied)
    PS> $TemplateUser = Get-ADUser -Identity Test1 -Properties *
    PS> New-ADUserFromTemplate -SamAccountName newuser123 -GivenName NewUser -Instance $TemplateUser
.EXAMPLE
   If the AD user object doesn't have all the properties on it, then the function only selects the available ones.
    PS> $TemplateUser = Get-ADUser -Identity Test1
    PS> New-ADUserFromTemplate -SamAccountName newuser123 -GivenName NewUser -Instance $TemplateUser
#>

function New-ADUserFromTemplate {

   param(
        # Specify the unique SamAccountName for the user
        [Parameter(Mandatory)]              
        [ValidateNotNullOrEmpty()]
        [string]$SamAccountName,
  
        # Specify the First Name or Given name for the user
        [Parameter(Mandatory)]
        [ValidateNotNullOrEmpty()]
        [String]$GivenName,

        [Parameter(Mandatory)]
        #[PSTypeName('Microsoft.ActiveDirectory.Management.ADUser')]
        [Object]$Instance,

        [Switch]$Passthru
        )

        TRY {
            if ( -not (Get-Module -Name ActiveDirectory) ) {
                # try to import the Module
                Import-Module -name ActiveDirectory -ErrorAction stop
                $null = Get-PSDrive -Name AD -ErrorAction stop  # Query if the AD PSdrive is loaded
            }
        }
        CATCH [System.IO.FileNotFoundException]{
            Write-Warning -Message $_.exception
            throw "AD module not found"
        }
        CATCH {
            throw $_.exception
        }        

        # Let's start by following the link : https://technet.microsoft.com/en-us/library/dd378959(v=ws.10).aspx
        $Instance.UserPrincipalName = $Null

        if ($Passthru.IsPresent) {  
            New-ADuser -Name $SamAccountName -SamAccountName $SamAccountName -GivenName $GivenName -Instance $Instance -Enabled $False -Passthru
        }
        else {
          $null =  New-ADuser -Name $SamAccountname -SamAccountName $SamAccountName -GivenName $GivenName -Instance $Instance -Enabled $False
        } 

}

Resources:

Previous post
http://www.dexterposh.com/2015/09/powershell-ad-pester-create-new-user.html

https://msdn.microsoft.com/en-us/library/ms679765(v=vs.85).aspx

http://www.hurryupandwait.io/blog/why-tdd-for-powershell-or-why-pester-or-why-unit-test-scripting-language

https://www.simple-talk.com/sysadmin/powershell/practical-powershell-unit-testing-getting-started/#eleventh

Read this post by Jakub where he explains the CQS principle, which I intend to follow in my future practice.
http://powershell.org/wp/2015/10/18/command-and-query-separation-in-pester-tests/

PowerShell + SCCM : WMI Scripting

Why should I use WMI, when there is a PowerShell module available for Configuration Manager (CM Module) already?

Well, the cmdlets interact with the WMI layer behind the scenes, and knowing which WMI classes the corresponding cmdlets work with can help in the future by:


  1. Switching to native WMI calls when the CM cmdlets fail for some reason (probably a bug in the CM Module).
  2. Making your scripts more efficient by optimizing the WQL queries; the cmdlets query all the properties of an object (SELECT *), whereas you can select only the ones you need (see the sketch after this list).
  3. Lastly, removing the dependency on the CM Module, so you can run these automation scripts from a machine that doesn't have the CM console installed (needed for the CM module).
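For example, a quick sketch of point 2 (the server, namespace, and class names are the same ones used later in this post):

# Instead of letting the cmdlet fetch every property (SELECT *), ask WMI only for what you need
Get-WmiObject -ComputerName 'DexSCCM' -Namespace 'root/SMS/Site_DEX' -Query "SELECT LocalizedDisplayName, CI_ID FROM SMS_Application WHERE IsLatest = 1"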
Moreover, ConfigMgr uses WMI extensively; you already have this knowledge, so leveraging it with PowerShell shouldn't surprise you. This post assumes you have been working with the CM cmdlets (and are already versed in PowerShell), know where the WMI namespace for ConfigMgr resides, and know the basics of WMI.


Example Problem:


I will use one of the problems people have been commenting about a lot on the below post:

PowerShell + SCCM 2012 R2 : Create an Application (from MSI) & Deploy it



What they want is to specify multiple app categories for an application while creating these apps using PowerShell.

This seemed trivial at first, as the help for the Set-CMApplication cmdlet (used to set the app category on an application) says it accepts a string array. It is probably a bug in the cmdlet (this seems to be working in the most recent CM module). See the comment screenshot from the post below:




This is strange.
So what do you do now? Don't worry, you can always fall back to using PowerShell and WMI until the bug is fixed (it seems to be fixed in the latest version).


So what I am going to show you now is:

  1. How to start exploring the associated WMI class.
  2. How to read the documentation.
  3. How to use PowerShell + WMI to automate the task.


For the above scenario, I have an application named Notepad++ and two application categories named "PSCreatedApps" and "OpenSource". I want to add these two categories to the application via WMI only (remember, my CM cmdlet has a bug).

Get the WMI Class name:


This shouldn't be too hard to find; in the post we were using the Set-CMApplication cmdlet to set multiple app categories. So the first and easiest way to find the WMI class you are playing with is the corresponding Get-CMApplication cmdlet (there are other ways to get this using the ConfigMgr console too; find them).

Pipe the output of the Get-CMApplication cmdlet to Get-Member to see the WMI class you have been fiddling with all along:
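A minimal sketch (assumes the CM module is loaded and the current location is the CMSite drive):

# The TypeName in the Get-Member output reveals the underlying WMI class
Get-CMApplication -Name 'Notepad++' | Get-Member
# TypeName: IResultObject#SMS_Application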



The TypeName says IResultObject#SMS_Application (not the WMI object) because the CM cmdlets use the IResultObject interface to expose data for result objects (don't worry about that part much). SMS_Application is the WMI class here.


Another way would be to closely observe SMSProv.log while you execute the CM cmdlet; reading this log is of utmost importance when scripting against ConfigMgr.

Reading SMSProv.log takes some time and practice, but a good start is to dump the verbose stream from the cmdlet, as it shows the WQL queries being run behind the scenes. You can then map those queries to SMSProv.log and understand what might be causing the failure.
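A sketch of capturing that verbose stream to a file (parameter values are from my lab; redirecting stream 4 saves the verbose messages for later comparison against SMSProv.log):

# Capture the verbose stream (stream 4) while the cmdlet runs
Set-CMApplication -Name 'Notepad++' -AppCategories 'PSCreatedApps' -Verbose 4> C:\temp\CMVerbose.txt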

Just to show how it is done, see below: the first verbose stream message showing the WQL, and where it shows up in the log:

The log is a mine of information, and one should invest time in interpreting it while playing with the cmdlets and WMI (even the actions on the console show up here; a neat trick to explore with, too).


Read the WMI Class documentation

In the documentation for the SMS_Application class you will find that its base class is SMS_ConfigurationItemBaseClass. Base class means that the SMS_Application (child) class inherits from SMS_ConfigurationItemBaseClass. So we actually need to look at the documentation for both classes.
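You can also ask WMI itself about the inheritance chain; a small sketch (server name and site code are from my lab):

# The Derivation property of a WMI class lists its ancestors;
# expect SMS_ConfigurationItemBaseClass to show up in the chain
([wmiclass]"\\DexSCCM\root\SMS\Site_DEX:SMS_Application").Derivation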



Also do a search on the page for all the properties having the word "Category" in them; below is a snip of all such properties from the page:

Now at this point we are looking to set a property on the application object which has something to do with the category, so only read/write properties should interest us; drop the LocalizedCategoryInstanceNames property from the list above :)
Take a moment to notice that the read/write properties named CategoryInstance_UniqueIDs and PlatformCategoryInstance_UniqueIDs point us to the base class documentation (highlighted in yellow). Click the link and it should take you to the base class, where for both properties you will see:


The documentation for CategoryInstance_UniqueIDs looks promising. Observe that the data type is a String array, which clearly means more than one category unique ID can be assigned. But how do we find these category unique instance IDs?

I am leaving the exercise of finding these via WMI alone to you, and taking a shortcut here by using the Get-CMCategory cmdlet:
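A sketch of that shortcut (assumes the CM module is loaded; -CategoryType narrows the query to application categories):

# Fetch the unique IDs for our two app categories via the CM cmdlet
Get-CMCategory -CategoryType AppCategories |
    Where-Object { $_.LocalizedCategoryInstanceName -in 'PSCreatedApps','OpenSource' } |
    Select-Object -Property LocalizedCategoryInstanceName, CategoryInstance_UniqueID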

We have all the key pieces together:
  1. The class name - SMS_Application
  2. The writable property corresponding to the categories - CategoryInstance_UniqueIDs
  3. The category unique IDs for our 2 categories

Let's get to the final phase of using pure WMI to set the 2 app categories on the application. Below is the same operation done via WMI that was intended to be done by the Set-CMApplication cmdlet:
# Import-Module configurationmanager
# No need for this we are using WMI. Set the default parameters for the WMI Calls ( personal preference)

$PSDefaultParameterValues = @{
    'Get-WMiObject:ComputerName'='DexSCCM'; # point the Get-WMIObject to the ConfigMgr server having WMI namespace installed
    'Get-WMiObject:NameSpace'='root/SMS/Site_DEX'; # Point to the correct WMI namespace for CM automation
}

# get the SMS_Application object instance for application Notepad++
$Application = Get-WmiObject -Query "SELECT * from SMS_Application WHERE LocalizedDisplayName = 'Notepad++' AND IsLatest = 1 AND IsHidden = 0"

# Get the UniqueIds for the categories - PSCreatedApps and OpenSource
$CategoryIDs = Get-WmiObject -Query "SELECT CategoryInstance_UniqueID FROM SMS_CategoryInstance WHERE CategoryTypeName='AppCategories' and LocalizedCategoryInstanceName in ('PSCreatedApps','OpenSource')" |
                    Select-Object -ExpandProperty CategoryInstance_UniqueID

# Let's modify the Object in the memory
$Application.CategoryInstance_UniqueIDs = $CategoryIDs

# Sync the changes to the ConfigMgr Server
$Application.Put()

Et voilà! (Check SMSProv.log too when you use this method, to troubleshoot.)

Below is a GIF showing this in action. I tried to show that all things done via the console, the CM cmdlets or PowerShell actually interface with the WMI layer (all actions get recorded in SMSProv.log):




Looking for more ConfigMgr + PowerShell stuff? Below is a link to all my posts around the topic:
http://www.dexterposh.com/p/collection-of-all-my-configmgr.html


Resources :

My friend Stephane has a few posts talking about troubleshooting WMI functions and a list of things you need to know when scripting against the SCCM WMI provider.
http://powershelldistrict.com/troubleshoot-wmi-functions/

http://powershelldistrict.com/top-6-things-you-need-to-know-when-scripting-with-sccm/

PowerShell + SCCM : Run CM cmdlets remotely

Today I saw a tweet by Justin Mathews about using implicit remoting to load the Configuration Manager module on a local machine. It caught my eye as I have never really tried it, but theoretically it can be done.




Note - The second tweet says "Cannot find a provider with the name CMSite"; the resolution is in the Troubleshooting section at the end.



Use Import-Module


Below is a video showing how you can use Import-Module as mentioned in the tweet, if everything goes without any errors :


Below is the code snippet used in the video :


# Create the PSSession
$Session = New-PSSession -ComputerName sccm

# Load the CM Module using Implicit Remoting
Import-Module -Name "C:\Program Files (x86)\Microsoft Configuration Manager\AdminConsole\bin\ConfigurationManager.psd1" -PSSession $Session

# Check the module is available locally
Get-Module -Name ConfigurationManager

# run the CM cmdlets locally
Get-CMSite

# Oops! We need to set the CMSite as our current location to run the CM cmdlets
Invoke-Command -Session $Session {Set-Location -Path DEX:}

# Run the cmdlet again
Get-CMSite


Use Export-PSSession

Now one can easily be tempted to run Export-PSSession, store the module locally, and next time just import the previously exported module and start using the CM cmdlets, but it is not that straightforward, at least with the CM cmdlets.

But there is a way to run the exported cmdlets (proxy functions): explicitly load the CM module in the remote session and change the current location to the CMSite provider (which I think is pointless, as the exported module should be doing it).

Below is a video showing how to do that :




Below is the first code snippet (as in the above video), used in the first PowerShell ISE tab:

# Create the PSSession
$Session = New-PSSession -ComputerName sccm

# Load the CM Module in the Remote PSSession and change the current location to your CMSite
Invoke-Command -Session $Session {
                                    Import-Module -Name "C:\Program Files (x86)\Microsoft Configuration Manager\AdminConsole\bin\ConfigurationManager.psd1"
                                    Set-Location -path DEX:
                                    }

# Export the Module Locally
Export-PSSession -OutputModule RemoteCMModule -Session $Session -Module ConfigurationManager

# This will work in the current PowerShell session
Import-Module RemoteCMModule

# run the CM cmdlets
Get-CMsite

Now, on another ISE tab, we have to do the below:


# check the current PSSessions
Get-PSSession

# On a new PowerShell session, import the module
Import-Module -Name RemoteCMModule

# try running a CM cmdlet now
Get-CMsite

# check the current PSSessions
$session = Get-PSSession

# One has to explicitly run this
Invoke-Command -Session $Session {
                                    Import-Module -Name "C:\Program Files (x86)\Microsoft Configuration Manager\AdminConsole\bin\ConfigurationManager.psd1"
                                    Set-Location -path DEX:
                                    }

# try running a CM cmdlet now
Get-CMsite



Use Invoke-Command

One can always simply use PowerShell remoting, like below:


# Create the PSSession
$Session = New-PSSession -ComputerName sccm

# Load the CM Module in the Remote PSSession and change the current location to your CMSite
Invoke-Command -Session $Session {
                                    Import-Module -Name "C:\Program Files (x86)\Microsoft Configuration Manager\AdminConsole\bin\ConfigurationManager.psd1"
                                    Set-Location -path DEX:
                                    }

# run the Cmdlets remotely
Invoke-Command -Session $Session {Get-CMSite}



Troubleshooting:

One must remember that even though you are using a PSSession, the ConfigurationManager module has to be loaded in that PSSession, and your current location has to be on the CMSite provider, in order to run the CM cmdlets.


You may come across a scenario (similar to the one mentioned in the tweet) where the CMSite PSProvider did not load correctly when you connected to the PSSession. You can always map the CMSite PSDrive manually; see the below video where I deliberately remove the CMSite PSDrive and map it again:

Note - Did you notice? I later used ABC as the name of the PSDrive instead of my CMSite name DEX ;)
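For reference, a minimal sketch of mapping the drive by hand inside the remote session (the site server name is from my lab, and the drive name is arbitrary; assumes the CM module is already imported there):

# Map a CMSite PSDrive manually and move onto it
Invoke-Command -Session $Session {
    New-PSDrive -Name ABC -PSProvider CMSite -Root 'sccm'
    Set-Location -Path ABC:
}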

Hope this gives you a better insight into running CM cmdlets remotely. One has to understand that Remoting must be enabled on the CM server you are connecting to.
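If it isn't yet, the standard one-liner does it (run elevated on the CM server; nothing ConfigMgr-specific about it):

# Enable PowerShell remoting on the server you intend to connect to
Enable-PSRemoting -Force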

PowerShell : Retry logic in Scripts

One of my projects required me to copy a CSV file (an important step) to a VM running on Server 2012 R2.
I found this excellent tip by Ravi on using the Copy-VMFile cmdlet in Server 2012 R2 Hyper-V. To use this cmdlet, I had to enable the "Guest Service Interface" component in the Integration Services (below is what the documentation says about the service).

This new component in the Integration Services allows copying files to a running VM without any network connection (how cool is that?).

The tip mentioned earlier talks about how to enable the component using Enable-VMIntegrationService, but there is a delay between enabling the component and being able to successfully use the Copy-VMFile cmdlet.

So how do I make sure that the service is running before the cmdlet is issued, or keep retrying the cmdlet until it succeeds?





The simplest way would be to use Start-Sleep to induce a delay and take a guess that the service will be running by the time the cmdlet executes, as done in the below function definition:

Function Copy-ImportantFileToVM {
    [CmdletBinding()]
    param($VMName)

        $VM = Get-VM -Name $VMName
        #Check if Guest Integration Service is enabled
        $GuestService = $VM.VMIntegrationService.Where({$_.Name -eq 'Guest Service Interface'})
        if (-not $GuestService.Enabled) {
            #Enable the GSI
            $GuestServiceStatus = $VM | Get-VMIntegrationService -Name "Guest Service Interface" | Enable-VMIntegrationService -Passthru
            if (-not $GuestServiceStatus.Enabled) {
                throw "Couldn't enable Guest Service Interface"
            }
        }    
        # Induce sleep in the script for 120 seconds just to be sure
        Start-Sleep -Seconds 120
        # Critical Step -> Copy test CSV to VM
        if (Test-Path -Path "$PSScriptRoot\test.csv") {
           TRY {
                $CopyFileHash = @{
                    Name=$VMName;
                    SourcePath="$PSScriptRoot\test.csv";
                    DestinationPath='C:\temp\test.csv';
                    FileSource='Host';
                    CreateFullPath=$true;
                    Force = $true;
                    Verbose=$true;
                    ErrorAction='Stop';
                }
                Copy-VMFile  @CopyFileHash
                }
            CATCH {
                # Put error handling here - maybe log it
                $PSCmdlet.ThrowTerminatingError($PSItem)
            }
        } # end if
}

That brings up another question: what if the delay is not enough, or too much?

Similarly, a more practical use case is the Azure cmdlets, which make REST API calls behind the scenes. What if, while calling one of the REST endpoints, the network fluctuated and the cmdlet failed at a critical step?

The bottom line is that I am really looking for retry logic within my scripts, so that my code retries an important step a few times before it dies out.

While researching the topic I found this article by Pawel and this article by Alex on retry logic in PowerShell. Using these as a reference, I wrote the below function named Invoke-ScriptBlockWithRetry.


function Invoke-ScriptBlockWithRetry
{
<#
.Synopsis
   Invokes a script block with resiliency.
.DESCRIPTION
   The function takes a script block as a mandatory argument and tries to run it a certain number of times (argument to -MaxRetries).
   It delays execution between subsequent retries by 10 seconds by default; a custom value can be passed to the -RetryDelay parameter.
.EXAMPLE
   First create a script block with -ErrorAction set to Stop and then pass it to the function
   PS> $CopyLambda = {Copy-Item -Path \\fileserver\Info\test.csv -Destination C:\Temp -ErrorAction Stop}
   PS> Invoke-ScriptBlockWithRetry -ScriptBlock $CopyLambda -MaxRetries 5 -Verbose
.EXAMPLE
   Script blocks have access to the current scope variables, so if you set a variable in the current scope, you can use that within the script block
   PS> $name = 'notepad'
   PS> Invoke-ScriptBlockWithRetry -ScriptBlock {Get-Process -Name $name -EA Stop} -MaxRetries 5 -Verbose

.NOTES
   Credits
   Inspired by -
        1. http://www.pabich.eu/2010/06/generic-retry-logic-in-powershell.html
        2. http://www.alexbevi.com/blog/2015/02/06/block-retry-using-powershell/
#>

    [CmdletBinding()]
    [OutputType([PSObject])]
    Param
    (
        # The script block to invoke (with retries on failure).
        [Parameter(Mandatory=$true,
                   ValueFromPipelineByPropertyName=$true,
                   Position=0)]
        [System.Management.Automation.ScriptBlock]
        $ScriptBlock,

        # Number of retries. Default is 10.
        [Parameter(Position=1)]
        [ValidateNotNullOrEmpty()]
        [int]$MaxRetries=10,

        # Number of seconds delay in retrying. Default is 10 seconds.
        [Parameter(Position=2)]
        [ValidateNotNullOrEmpty()]
        [int]$RetryDelay=10
    )

    Begin
    {
        Write-verbose -Message "[BEGIN] Starting the function"
        $currentRetry = 1
        $Success = $False
    }
    Process
    {
        do {
           try
            {
                Write-Verbose -Message "Running the passed script block -> $($ScriptBlock)"
                $result = & $ScriptBlock # invoke the script block
                $success = $true
                Write-Verbose -Message "Script block ran successfully -> $($ScriptBlock)"
                return $result
            }
            catch
            {
                $currentRetry = $currentRetry + 1              
                Write-Error -Message "Failed to execute -> $($ScriptBlock).`n Error -> $($_.Exception)"  # Write non-terminating error for allowed retries
    
                if ($currentRetry -gt $MaxRetries) {         
                    # If the current try count has exceeded maximum retries, throw a terminating error and come out. In place to avoid an infinite loop 
                    Write-Warning -Message "Could not execute -> $($ScriptBlock).`n Error -> $($_.Exception)"
                    $PSCmdlet.ThrowTerminatingError($PSItem) # Rethrow the exception for the caller; this is a terminating error as the retries have exceeded MaxRetries. In place to avoid an infinite loop.
                }
                else {
                    Write-verbose -Message "Waiting $RetryDelay second(s) before attempting again"
                    Start-Sleep -seconds $RetryDelay
                }
            }
        } while(-not $Success) # Do until you succeed
    }
    End
    {
        Write-verbose -Message "[END] Ending the function"
    }
}

Now the trick to using this function in your scripts is to pass it a script block with -ErrorAction set to Stop for the steps you think can possibly fail and that you want to be retried.

Let's rewrite our Copy-ImportantFileToVM function using Invoke-ScriptBlockWithRetry:


Function Copy-ImportantFileToVM {
    [CmdletBinding()]
    param(
        [Parameter(Mandatory)]
        [ValidateNotNullOrEmpty()]
        [String]$VMName
        )

        $VM = Get-VM -Name $VMName
        #Check if Guest integration Service is enabled
        $GuestService = $VM.VMIntegrationService.Where({$_.Name -eq 'Guest Service Interface'})
        if (-not $GuestService.Enabled) {
            #Enable the Guest Integration Service
            $GuestServiceStatus = $VM | Get-VMIntegrationService -Name "Guest Service Interface" | Enable-VMIntegrationService -Passthru
            if (-not $GuestServiceStatus.Enabled) {
                throw "Couldn't enable Guest Service Interface"
            }
        }

        $CopyFileHash = @{
                            Name=$VMName;
                            SourcePath="$PSScriptRoot\test.csv";
                            DestinationPath='C:\temp\test.csv';
                            FileSource='Host';
                            CreateFullPath=$true;
                            Force = $true;
                            Verbose=$true;
                            ErrorAction='Stop';
                        }
        # Copy test CSV to VM
        if (Test-Path -Path "$PSScriptRoot\test.csv") {
            Invoke-ScriptBlockWithRetry {
                # Critical step to copy the CSV, I want it to be retried
                Copy-VMFile @CopyFileHash
                # Non-critical step, just an example -> I don't care if notepad is running
                Get-Process -Name notepad -ErrorAction SilentlyContinue
            }
        } # end if

}



If you have a good eye, you would have noticed the non-critical step placed inside the script block. I put it there just to show that it is possible to have steps within your script block that you don't care about if they fail (-ErrorAction SilentlyContinue suppresses the error messages for the non-critical step, so it never triggers a retry).

Note that I can also pass the maximum number of retries along with the wait interval (in seconds) between retries. Now you can be very creative and extend this as per your needs in various scenarios, as in the sketch below.
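For instance, a hypothetical sketch for a flaky REST call (the URI is made up; the -ErrorAction Stop inside the block is what makes a failure trigger a retry):

# Retry a REST call up to 5 times, waiting 30 seconds between attempts
$result = Invoke-ScriptBlockWithRetry -ScriptBlock {
    Invoke-RestMethod -Uri 'https://api.example.com/status' -ErrorAction Stop
} -MaxRetries 5 -RetryDelay 30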

Have fun exploring!