When It’s Hard… Do It More

As the buzz around “devops” starts to decline, there are a few lessons learned that we must not lose. My personal favorite is “When it’s hard, do it more,” which I stole from The DevOps Handbook. That book is a must-read and one of my all-time favorite work-related books; if you haven’t already done so, read it!

When it’s hard, do it more

Every time something unexpected happens during a planned (or unplanned) maintenance operation, I urge the team involved to do it again. Immediately. After there’s time to understand what happened, but before the information gets stale. If you take an outage, make it a valuable one! Learn, as an individual, how not to do that again. Learn, as a team, by updating processes or adding additional logic to trap the error faster. Learn, as an organization, by updating documentation and/or SOPs around what was learned. Finally, do it again! It sounds crazy, but the only way to get better is to practice. How else will you know if you’ve updated the process and documentation correctly?

Here are three areas, off the top of my head, that are hard but critical to find the time to do more of.

Roll Credentials. Regularly.

Whether your focus is front end, back end, both, or other… security is probably the most important thing we can all do for ourselves and for our organizations. Even with managed services or managed secrets (Azure Key Vault, AWS KMS), all organizations have keys that need to be rotated. Rolling these keys is our first line of defense against intruders, both internal and external. Ideally, we have an automated process that rolls all keys [on demand | daily | weekly | whatever makes sense], but even if the process is all manual, that’s ok!! What’s important is having the process to roll keys in the first place.

Rolling keys should be done regularly. Everyone should be trained in how to roll keys, and everyone should practice rolling all the keys they are responsible for. Rolling keys should be easy; until then… if it’s hard, do it more. It will make the process better, faster, stronger, all while improving the security of the organization. With security, there are no second chances!
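To make that concrete, here’s a minimal sketch of what one step of an automated roll might look like, using the AzureRm Key Vault cmdlets. The vault name and secret name are made up for illustration:

# Generate a new random value for the secret
# (fine for a sketch; use a proper secret generator for real keys)
$chars = (48..57) + (65..90) + (97..122) | ForEach-Object { [char]$_ }
$newValue = -join (1..32 | ForEach-Object { Get-Random -InputObject $chars })

# Write a new version of the secret to Key Vault;
# consumers that read it at runtime pick up the new version on their next fetch
$secure = ConvertTo-SecureString $newValue -AsPlainText -Force
Set-AzureKeyVaultSecret -VaultName 'my-vault' -Name 'service-api-key' -SecretValue $secure

The point isn’t the specific cmdlets; it’s that the roll becomes a repeatable script anyone on the team can practice.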

Write Good Tests

Writing good tests is an art. If you do not have a system set up to test code on commit and push, get it done! Jenkins is free, well-documented, runs on any OS, is easy to set up and maintain in a small environment (fewer than 50 people), and quickly returns the investment made to get it working. You will wonder how you ever functioned without it once you’re up and running.

All code should be paired, in some way, with an automated test. Many times we’re testing our code in multiple ways (unit, functional, integration, regression, performance). I’m even seeing more test code itself get tested in an automated fashion, to ensure it’s functioning under the current assumptions. Regardless of the technology, the type of testing chosen, or the amount of testing performed, all code should be tested with every commit/push. There are millions of great articles around testing. You can find articles targeted at all experience levels and all languages.

Writing good tests today prevents future you from breaking your stuff. It may be hard to start testing if you have never tested code before, but trust me when I say that it gets easier with practice, and of course, if it’s hard, do it more.
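If you’ve never written a test before, here’s about the smallest useful Pester example I can imagine; the function is made up just to have something to test:

# A trivial function under test
function Get-Greeting {
  param([string]$Name)
  "Hello, $Name!"
}

# A Pester test that pins down its behavior
Describe 'Get-Greeting' {
  It 'greets the caller by name' {
    Get-Greeting -Name 'World' | Should Be 'Hello, World!'
  }
}

Save it as a *.Tests.ps1 file, run Invoke-Pester, and you’re testing.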

Give Back

I’m leaving this bucket generic. I was originally going to call out writing good documentation around your code and processes, but after thinking about it, I’m adding the community to this section. We should all be blogging or contributing to open source projects regularly; whatever you can spare, do your part to give back to the community. You may think no one wants to read your stuff, but you’d be wrong. If you hit a weird bug somewhere, even if you don’t have the solution, write it down somewhere. Someone may find it on a late-night hasty Google search. You know the ones — page 5 of the results because you are desperate. You can contribute to open source projects without knowing how to code! Issues can be filed. Documentation can be updated. In an open source world, the littlest bit from a lot of people drives the entire ecosystem forward.

Writing good documentation is also an art form. Too much documentation and no one will read it. Too little documentation and no one will use your stuff. It takes practice to write good documentation, and it requires a feedback cycle of asking the reader how you can improve. This applies to SOPs too. It’s never enough to write documentation once and never come back to it. It has to be maintained along with your code so that it’s up to date and trustworthy.

Whether it’s giving back to the community or writing documentation around your code, processes, tests, whatever… give back thru better writing. Give back to yourself, to your team, to your organization, and to the community. It’s the community service of our time. I know it’s not always easy to do or clear where to start, BUT (for the final time)… if it’s hard, do it more.

Conclusion

Thank you for reading! These opinions are mine and mine alone. I hope you liked the post; it was a little different than the typical technical, subject-focused write-up. If you have questions or want to get a hold of me, leave a comment below or tweet me (@NickHudacin)!

 


Mocking AzureRm Commands with Pester: New-AzureRmDnsRecordSet

Most of my readers already know how much I love Pester. Hi mom! Ha. I can’t get enough of it. But mocking some of the AzureRm commands can be exceedingly painful. A lot of the AzureRm commands require specific object types to be passed thru, even when trying to mock them, and creating those specific object types is a lot of trial and error. Even though my Google-Fu is quite strong, I seemed to strike out when searching for how others were mocking these commands. So to help my fellow engineers, I’ve decided to start posting how I was able to mock some of the more finicky AzureRm commands as I come across them.

 

Mocking New-AzureRmDnsRecordSet was a bit of a hurdle. We need to pass in a valid DNS record set, and returning $null doesn’t work. I originally thought I would just let New-AzureRmDnsRecordConfig run with a dummy value (not mock it at all), but for whatever reason, it requires you to log in to the portal!

This is what I ended up with:
function Add-CustomDnsRecord {
  $params = @{
    Name              = 'custom-a-record'
    RecordType        = 'A'
    ResourceGroupName = 'myresourcegroupname'
    TTL               = 60
    ZoneName          = 'internalDNS'
    # New-AzureRmDnsRecordConfig builds the record objects the record set requires
    DnsRecords        = New-AzureRmDnsRecordConfig -Ipv4Address 127.0.0.1
  }

  New-AzureRmDnsRecordSet @params
}

Describe Add-CustomDnsRecord {
  # Nothing needs to come back here; we only care that the command gets called
  Mock New-AzureRmDnsRecordSet { }

  # Return real (empty) record objects so the DnsRecords parameter binds;
  # $null is rejected by the parameter's type constraint
  Mock New-AzureRmDnsRecordConfig {
    return @(
      New-Object Microsoft.Azure.Commands.Dns.ARecord
      New-Object Microsoft.Azure.Commands.Dns.CnameRecord
    )
  }

  It 'mocks correctly' {
    Add-CustomDnsRecord
    Assert-MockCalled New-AzureRmDnsRecordSet -Exactly 1 -Scope It
  }
}

Till the next command!

Thanks for reading, please post any comments or questions below.

Local Jenkins Master + Slave with Docker

Today, I’ll be adding to my last post about using Docker and Jenkins to stand up a quick local Jenkins instance. The only thing I would recommend changing is to mount a local folder as your Jenkins home directory. This saves your state and doesn’t require you to reconfigure and reinstall your plugins each time. Helpful if you plan to use this occasionally. What I’ll be adding to the original post is how to add your local laptop or workstation as a build slave to the Jenkins master container. Super easy to do and very helpful for testing little things (looking at you, $LASTEXITCODE) before you commit & push.

I have already allowed Docker to mount a volume on my laptop (this is in the Docker options menu). Let’s get my Jenkins master container going. Remember, I’m going to mount a local volume to use as the container’s jenkins_home directory. Locally, this is just a directory I created for messing around; I won’t put anything in here that I can’t lose.
docker run -p 8080:8080 -p 50000:50000 -v C:/_docker/jenkins:/var/jenkins_home jenkins/jenkins:lts

If this is your first time setting Jenkins up, make sure to browse my previous post for any manual configurations. You will also need to install Java. Any old Java will do; I don't think you need the JDK or anything. Make sure your Java installation is available on your $env:PATH. Anyway, once the Jenkins master comes up, navigate to http://localhost:8080. You can grab your initial admin password from $jenkins_home/secrets/initialAdminPassword, with $jenkins_home being the volume configured in the docker run command.
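With the volume mounted above, grabbing that password from PowerShell is a one-liner (adjust the path if you mounted a different folder):

Get-Content C:\_docker\jenkins\secrets\initialAdminPassword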

Add a new node:

Jenkins Home >> Manage Jenkins >> Manage Nodes >> New Node


The new node will be your workstation, but it can be configured like a real build agent: you can set labels, executors, and environment variables. You will need a slave workspace; I threw mine alongside my $jenkins_home directory.
Now that I have a new node created, I just need to bring it online. Finding the screen is always a little tricky, but clicking the offline node on the home page will display some options to bring the slave online. I think the easiest is using the "Launch agent from browser" option, which downloads a .jnlp file with the specific configuration for connecting to your Jenkins master instance. You could just as easily run the slave.jar file if you have a specific version you need.
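For reference, launching the agent from the command line looks something like this (the node name matches what I configured above; your URL, and possibly a -secret argument, will differ):

java -jar slave.jar -jnlpUrl http://localhost:8080/computer/windows_slave/slave-agent.jnlp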
The new node should be online and ready to use. Because I added a "powershell" label to the node configuration for this new slave, I can create a new test pipeline job and run:
node('powershell') {
    stage('Execute Batch') {
        bat 'powershell -command "echo something awesome"'
    }
}
And of course, when you get this error, you can adjust quickly (because you have a local Jenkins instance to mess around with):
Started by user admin
[Pipeline] node
Running on windows_slave in C:\_docker\jenkins_slave\workspace\test-1
[Pipeline] {
[Pipeline] stage
[Pipeline] { (Execute Batch)
[Pipeline] bat
[test-1] Running batch script
C:\_docker\jenkins_slave\workspace\test-1>powershell -command "echo something awesome"
. : File C:\Users\nhudacin\Documents\WindowsPowerShell\Microsoft.PowerShell_profile.ps1 cannot be loaded because
running scripts is disabled on this system. For more information, see about_Execution_Policies at
http://go.microsoft.com/fwlink/?LinkID=135170.
At line:1 char:3
+ . 'C:\Users\nhudacin\Documents\WindowsPowerShell\Microsoft.PowerShell ...
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : SecurityError: (:) [], PSSecurityException
+ FullyQualifiedErrorId : UnauthorizedAccess
something
awesome
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
Finished: SUCCESS
Adding -ExecutionPolicy Bypass to the PowerShell call gets you past the profile-loading error:

node('powershell') {
    stage('Execute Batch') {
        bat 'powershell -ExecutionPolicy ByPass -command "echo something awesome"'
    }
}
Pretty cool, right!? Thanks for the read; any questions, comments, or corrections, please feel free to comment below!

VS Code + Integrated Console + Pester = No Excuse Testing

Following up on an earlier post where I lightly touched on using the PowerShell extension for the VS Code editor, I’d love to share how easy it is to Pester test and hopefully convince you to start testing your code today! If you’re not familiar with me, I’m a huge fan of Pester. I mean, this is such a great module that it’s baked into Windows! I Pester test EVERYTHING, even things outside of my custom PowerShell functions… ETL executions, Jenkins job statuses, all kinds of things. I did a post about Pester testing SQL Server stuff; check it out if you haven’t seen it!
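As a taste of what testing things beyond PowerShell functions can look like, here’s a sketch of a Pester test that checks a Jenkins job’s last build over the REST API. The job name and URL are made up; point it at your own instance (and add credentials if yours requires a login):

Describe 'Jenkins job health' {
  It 'last build of test-1 succeeded' {
    # Jenkins exposes build metadata as JSON under /api/json
    $build = Invoke-RestMethod 'http://localhost:8080/job/test-1/lastBuild/api/json'
    $build.result | Should Be 'SUCCESS'
  }
}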

Using VS Code and the PowerShell extension, writing Pester tests has never been easier!!


Using the PowerShell Integrated Console in VS Code

VS Code is my absolute favorite editor. It has been my go-to IDE for Chef cookbook development for some time. I was hooked when I tried the PowerShell extension as my default terminal (ctrl + `) and saw my profile stuff (err – PSReadLine) all loaded up.

Then I tried out the 0.10.0 release. Wow, completely blown away! We now have a fully integrated console to run and debug PowerShell scripts. Many thanks to David Wilson and Keith Hill for this awesome stuff.

Let’s take a look at the integrated console.


Just like in PowerShell ISE, I can highlight a hash table, hit the shortcut for “Run Selection” (F8), and jump down to the console to check the variable values. This is really useful for stepping thru some code or building out new code.

All of the commands I’ve tried so far seem to work nicely. This will for sure replace PowerShell ISE for me.

# Highlight this hash table and hit F8 to run just the selection,
# then inspect $settings in the integrated console
$settings = @{
  hashkey = 'value'
}

# Pick 10 random numbers between 0 and 100 and say hello from each
(0..100) |
  Get-Random -Count 10 |
    %{ Write-Output "Hello, this is $_ reporting..." }

Exploring Jenkins DSL – Preface: Up and Running with Docker & Jenkins

While working on my multi-part series covering the Job-DSL plugin for Jenkins, I wanted to get my own Jenkins instance stood up so that I could test and play without affecting any of our shared instances. That’s exactly what’s being covered here: getting a brand new, personal Jenkins instance up and running so that we can test DSL code without affecting anyone else. I’m going to be using Docker for this walk thru, and I can’t believe how easy the whole process was!! So please, if you’re following along in my DSL exploration – start here and get your own environment to destroy!!

Assuming you have Docker already installed, let’s go ahead and pull the latest Jenkins image which amounts to just this line of code:

docker pull jenkins

And once the image is available locally, we can go ahead and run it like this:

docker run -p 8080:8080 -p 50000:50000 jenkins

In the latest version (I think all versions starting at 2.x), Jenkins requires an admin password. The initial admin password will be located somewhere in the docker run output; yours will absolutely be different from mine.

Copy the initial admin password because you'll need it when trying to log in to your Jenkins instance. My PowerShell session isn't returned to me, but that's ok; when I see the line "setting agent port for jnlp", I know that I'm ready to go. In my browser, all I need to do is navigate to localhost:8080, where I should see a Jenkins login screen. Remember that admin password you copied to your clipboard? This is where you'll use it. At the next screen, I'm just going to select "Install Suggested Plugins" for this demo.

After the recommended plugins are installed, I'll install the job-dsl and greenballs plugins via the "Manage Jenkins" > "Manage Plugins" screen. Greenballs? Why must we have that for the demo? Because, in my mind, a representation of a successful job is (and always will be) green. I just wish they'd change the default color of successful runs from blue to green so I didn't need to install an additional plugin.

That's it! Surprisingly, I was up and running in just a few minutes with very little Docker experience! I slotted a couple of hours for this exercise, figuring the learning curve on Docker alone would drag me down a bit. Nope! Now I have a sandbox Jenkins instance to experiment on without risking a bad script taking down my production Jenkins instance (or even my test instance).

Use PowerShell to Generate Chef Checksums

I hit an exciting milestone in my career today… performance tuning a Chef cookbook! When we started to get serious about infrastructure automation, I never imagined how hard it was gonna be or how long it was gonna take. But here I am, finally caring how long my chef-client runs are taking. The soft silhouette begins to appear in the way-off distance… a vision of managed infrastructure. Clusters of machines talking to each other, taking ownership when leaders fail. Reporting back when something isn’t right or automatically adjusting to fit the demand. The possibilities are endless! We’re not quite there, but we will be, I have no doubt.

So you wanna know how the performance tuning went? I took a chef-client run down from around 5 minutes to under 30 seconds, and it was stupid simple! So simple, I wanted to share it: just adding checksum values to remote_file, cookbook_file, windows_package, seven_zip_archive, and all the other resources where checksum is a property. At this point, if you’re thinking “well, why wouldn’t you already have the checksum values?” then you’re probably not doing cookbook development on a Windows machine. To get a SHA-256 checksum value on Windows 7 (most every enterprise machine today)… you needed to use certutil. Look at this crap:

λ certUtil -hashfile desktop.ini SHA256
SHA256 hash of file desktop.ini:
aa 97 c6 bb 5c a4 e0 fb 64 60 3e ed ba de 3a 00 39 33 b6 e5 7a dc fa 57 e6 4b 7b a1 32 c5 4b cf
CertUtil: -hashfile command completed successfully.

What the Fxxx am I supposed to do with that!? Write some cmd/bat script to take out the spaces? Too lazy. Probably could do a PowerShell one-liner...

PS C:\_source\git> (certUtil -hashfile desktop.ini SHA256)[1].Replace(' ','')
aa97c6bb5ca4e0fb64603eedbade3a003933b6e57adcfa57e64b7ba132c54bcf

Yea, that would work. I'm lucky enough to be on Windows 10, so I have access to the PowerShell v4+ Get-FileHash cmdlet. If you're on Windows 7 and haven't grabbed PowerShell v5 yet... seriously, go do it! This is how I did it with Get-FileHash:

PS C:\_source\git> (Get-FileHash Desktop.ini -Algorithm SHA256 | Select -ExpandProperty Hash).ToLower()
aa97c6bb5ca4e0fb64603eedbade3a003933b6e57adcfa57e64b7ba132c54bcf

So that's it... I went thru and generated checksum values for all the declared resources, and I am very pleased with the results! It seems so obvious now; in hindsight, all of our cookbook resources should have already included the checksum values. It wasn't too long ago when getting a new cookbook to run successfully in all of our environments was miracle enough. While iterating thru several re-writes, it never made sense to do the extra step. But now, with some cookbooks being applied every few minutes... this stuff matters!
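Since I had a pile of resources to update, a tiny helper around Get-FileHash makes it easy to hash a whole folder of artifacts at once. A quick sketch; the folder path here is hypothetical:

# Emit a lowercase SHA-256 checksum, ready to paste into a checksum property
function Get-ChefChecksum {
  param([Parameter(Mandatory, ValueFromPipeline)][string]$Path)
  process {
    (Get-FileHash -Path $Path -Algorithm SHA256).Hash.ToLower()
  }
}

# Hash every file in a local artifact cache (made-up path)
Get-ChildItem C:\_source\artifacts -File |
  ForEach-Object { '{0}: {1}' -f $_.Name, (Get-ChefChecksum $_.FullName) }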

On to the next challenge... Resolving cookbook dependency issues!

As always please feel free to comment below or reach out to me on Twitter (@nhudacin) with any questions. Happy coding!