tag:blogger.com,1999:blog-89425992024-02-25T16:13:16.215-05:00the urban canuk, ehLiving idyllically in a .NET, C#, TDD worldbryanhttp://www.blogger.com/profile/01332614158223702009noreply@blogger.comBlogger279125tag:blogger.com,1999:blog-8942599.post-36248358380513616882022-01-20T23:55:00.000-05:002022-01-22T17:28:54.574-05:00Invoking Azure DevOps Hidden REST API
<a data-flickr-embed="true" href="https://www.flickr.com/photos/eskimo_jo/17358896716/in/photolist-DnqGb1-DD8hKU-HR81Z3-FDeDaD-srWTdG-rg3HWD-DHSjFL-24DFA5N" title="GEMS"><img src="https://live.staticflickr.com/8883/17358896716_7c6f36ab29_c.jpg" width="800" height="504" alt="GEMS"></a>
<script async src="//embedr.flickr.com/assets/client-code.js" charset="utf-8"></script>
<p>Azure DevOps has a great REST API, but every now and then there are some things you can do in the user-interface that you can’t do from the REST API.</p>
<p>But did you know Azure DevOps has a hidden API?</p>
<h2 id="wait-hidden-api">Wait, Hidden API?</h2>
<p>Technically, it’s not really a hidden API. Like most modern web applications, Azure DevOps exposes a set of REST endpoints to its user-interface that are different from the official REST API. These endpoints follow a REST convention and use the same authentication scheme, so you can use them interchangeably from a non-interactive script.</p>
<p>While the Azure DevOps REST API is versioned, documented and supported by Microsoft, these endpoints are not, so there’s no guarantee that they won’t change in the future. Buyer beware, here be dragons, etc.</p>
<p>If you’re okay with this and you’re open to some experimentation, here’s how you can go about finding these endpoints and including them in your scripts.</p>
<h2 id="use-the-inspector">Use the Inspector</h2>
<p>It should come as no surprise that you need to use your browser’s Developer Tools to inspect the DOM and the Network Traffic while using the feature you’re interested in.</p>
<p>As an example, there isn’t a great API that can give you the list of builds and their status by stage. You can query the list of builds and then use the Timeline API for each build to get the outcome of the stages, but that approach is heavy and chatty. The user-interface for build history, however, easily shows us the build history and the stage status. So let’s see how the user-interface gets its data:</p>
<ol type="1">
<li>Navigate to the pipeline and open the Developer Tools (CTRL + SHIFT + I)</li>
<li>Switch to the Network panel</li>
<li>In the Filter, select <em>Fetch/XHR</em></li>
<li>Slowly scroll down until you see new requests being submitted from the page</li>
</ol>
<p>A few things to note:</p>
<ul>
<li>Requests are <code>POST</code>s</li>
<li>The endpoint is <code>/<organization>/_apis/Contribution/HierarchyQuery/project/<project-id></code></li>
<li>The payload of the request contains a <code>contributionIds</code> array and details that represent the current context</li>
</ul>
<pre class="#brush:ps;toolbar:false;gutter:false;">{
  "contributionIds": [ "ms.vss-build-web.runs-data-provider" ],
  "dataProviderContext": {
    "properties": {
      "continuationToken": "2021-10-08T15:53:40.2073297Z",
      "definitionId": "1234",
      "sourcePage": {
        "url": "https://dev.azure.com/myOrganization/myProject/_build?definitionId=1234",
        "routeId": "ms.vss-build-web.pipeline-details-route",
        "routeValues": {
          "project": "myProject",
          "viewname": "details",
          "controller": "ContributedPage",
          "action": "Execute",
          "serviceHost": "<guid>"
        }
      }
    }
  }
}</pre>
<ul>
<li>The response object has a <code>dataProviders</code> object that corresponds to the <code>contributionIds</code> value specified above.</li>
</ul>
<pre class="#brush:ps;toolbar:false;gutter:false;">{
  "dataProviders": {
    "ms.vss-build-web.runs-data-provider": {
      "continuationToken": "<date-time>",
      "runs": [
        {
          "run": {},
          "stages": []
        }
      ]
    }
  }
}</pre>
<p>Using the above syntax, you could in theory replicate any UI-based function by sniffing the contributionId and payload.</p>
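<p>For example, here’s a minimal sketch of calling this endpoint with <code>Invoke-RestMethod</code> and a personal access token. The organization, project and definition id values are placeholders that you’d substitute with your own:</p>
<pre class="#brush:ps;toolbar:false;gutter:false;"># minimal sketch: POST the hierarchy query directly (placeholder values)
$pat = "<personal-access-token>"
$headers = @{
  Authorization = "Basic {0}" -f [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes(":$pat"))
}
$body = @{
  contributionIds = @("ms.vss-build-web.runs-data-provider")
  dataProviderContext = @{
    properties = @{
      definitionId = "1234"
      sourcePage = @{
        routeId = "ms.vss-build-web.pipeline-details-route"
        routeValues = @{ project = "myProject" }
      }
    }
  }
} | ConvertTo-Json -Depth 10

$uri = "https://dev.azure.com/myOrganization/_apis/Contribution/HierarchyQuery/project/myProject?api-version=6.0-preview"
$response = Invoke-RestMethod -Method Post -Uri $uri -Headers $headers -Body $body -ContentType "application/json"

# drill into the data provider that was requested
$response.dataProviders."ms.vss-build-web.runs-data-provider".runs</pre>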
<h2 id="magic-parameters">Magic Parameters</h2>
<p>Now, here’s the fun part. I haven’t tested this trick for all endpoints, but it works for the majority that I’ve tried. Take any endpoint in Azure DevOps and tack on the following querystring parameters: <code>__rt=fps&__ver=2.0</code></p>
<p>If we stick with our example of viewing build history for a YAML pipeline, we can take this URL:</p>
<ul>
<li><code>https://dev.azure.com/<organization>/<project>/_build?definitionId=<definition-id></code></li>
</ul>
<p>Then tack on our magic parameters:</p>
<ul>
<li><code>https://dev.azure.com/<organization>/<project>/_build?definitionId=<definition-id>&__rt=fps&__ver=2.0</code></li>
</ul>
<p>And immediately, you’ll notice that JSON is rendered instead of the UI. This JSON contains a lot of data, but if we narrow in on the details of the <code>ms.vss-build-web.runs-data-provider</code>, you’ll notice the same data as what’s returned from the <code>POST</code> above:</p>
<pre class="#brush:ps;toolbar:false;gutter:false;">{
  "fps": {
    "dataProviders": {
      "data": {
        "ms.vss-build-web.runs-data-provider": {
          "continuationToken": "<date-time>",
          "runs": [
            {
              "run": {},
              "stages": []
            }
          ]
        }
      }
    }
  }
}</pre>
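<p>As a rough sketch (again with placeholder organization, project and definition id values), fetching the page as JSON and pulling out the data provider looks like this:</p>
<pre class="#brush:ps;toolbar:false;gutter:false;"># sketch: fetch the page as JSON using the magic query-string parameters
$pat = "<personal-access-token>"
$headers = @{
  Authorization = "Basic {0}" -f [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes(":$pat"))
}
$uri = "https://dev.azure.com/myOrganization/myProject/_build?definitionId=1234&__rt=fps&__ver=2.0"
$page = Invoke-RestMethod -Method Get -Uri $uri -Headers $headers

# same data provider as the POST-based hierarchy query
$page.fps.dataProviders.data."ms.vss-build-web.runs-data-provider".runs</pre>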
<h2 id="paging-results">Paging Results</h2>
<p>The Azure DevOps UI automatically limits the amount of data to roughly the first 50 records, so if you need to fetch all of the data you need to combine the two approaches:</p>
<ul>
<li>Query the endpoint with a <code>GET</code> request and the <code>__rt=fps&__ver=2.0</code> parameters to obtain the first set of data and the continuation token</li>
<li>Issue subsequent requests as <code>POST</code>s, passing the continuation token in the body.</li>
</ul>
<h2 id="a-wrapper-for-invoke-restmethod">A wrapper for Invoke-RestMethod</h2>
<p>Now let’s put this to work and access the API from PowerShell.</p>
<p>So let’s start with the following wrapper around <code>Invoke-RestMethod</code> that contains the logic of communicating with either of the two approaches listed above:</p>
<pre class="#brush:ps;toolbar:false;gutter:false;">function Invoke-AzureDevOpsRestMethod
{
  param(
    [Parameter(Mandatory)]
    [string]$Organization,
    [Parameter(Mandatory)]
    [string]$ProjectName,
    [Parameter()]
    [ValidateSet("GET","POST","PATCH","DELETE")]
    [string]$Method = "GET",
    [Parameter(Mandatory)]
    [string]$AccessToken,
    [Parameter()]
    [string]$ApiPath = "",
    [Parameter()]
    [hashtable]$QueryStringParameters = @{},
    [Parameter()]
    [string]$ApiVersion = "6.0-preview",
    [Parameter()]
    [string]$ContentType = "application/json",
    [Parameter()]
    [psobject]$Body
  )

  $QueryStringParameters.Add("api-version", $ApiVersion)
  $queryParams = $QueryStringParameters.GetEnumerator() |
    ForEach-Object { "{0}={1}" -f $_.Name, $_.Value } |
    Join-String -Separator "&"

  if ([string]::IsNullOrEmpty($ApiPath)) {
    # no api path = hidden HierarchyQuery endpoint
    $uriFormat = "https://dev.azure.com/{0}/_apis/Contribution/HierarchyQuery/project/{1}?{3}"
  } else {
    $uriFormat = "https://dev.azure.com/{0}/{1}/{2}?{3}"
  }
  $uri = $uriFormat -f $Organization, $ProjectName, $ApiPath, $queryParams
  $uri = [Uri]::EscapeUriString($uri)

  $header = @{ Authorization = "Basic {0}" -f [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes(":$($AccessToken)")) }

  $invokeArgs = @{
    Method  = $Method
    Uri     = $uri
    Headers = $header
  }
  if ($Method -ne "GET" -and $null -ne $Body) {
    $json = ConvertTo-Json $Body -Depth 10 -Compress
    $invokeArgs.Add("Body", $json)
    $invokeArgs.Add("ContentType", $ContentType)
  }

  Invoke-RestMethod @invokeArgs
}</pre>
<h2 id="error-handling-and-retry-logic">Error Handling and Retry Logic</h2>
<p>No REST API is guaranteed to return results 100% of the time. You’ll want to accommodate intermittent network issues, minor service interruptions and request throttling.</p>
<p>Let’s further improve upon this implementation by adding some error handling and retry logic, leveraging the retry support that is built into the <code>Invoke-RestMethod</code> cmdlet. When the <code>MaximumRetryCount</code> parameter is specified, <code>Invoke-RestMethod</code> <a href="https://github.com/PowerShell/PowerShell/blob/7dc4587014bfa22919c933607bf564f0ba53db2e/src/Microsoft.PowerShell.Commands.Utility/commands/utility/WebCmdlet/Common/WebRequestPSCmdlet.Common.cs#L1346">will retry the request if the status code is 304, or between 400-599</a>.</p>
<p>This should work for the majority of throttling issues, but it also means that requests that fail authentication or authorization (401/403) will be attempted multiple times. In my opinion, if your credentials are wrong then it won’t matter how many times you try, so repeating these requests will ultimately just make your script slower. For the purposes of this post, I’m willing to live with that, but if that’s not for you, you might want to roll your own retry loop.</p>
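<p>If you’d rather roll your own, a rough sketch might look like the following. The choice to bail out immediately on 401/403 is my own preference:</p>
<pre class="#brush:ps;toolbar:false;gutter:false;"># sketch of a hand-rolled retry loop that gives up immediately on auth failures
$maxRetries = 3
$retryInterval = 10
$attempt = 0
do {
  $attempt++
  try {
    $result = Invoke-RestMethod @invokeArgs
    break
  }
  catch {
    $statusCode = [int]$_.Exception.Response.StatusCode
    # don't retry requests that will never succeed, or when we're out of attempts
    if ($statusCode -in 401, 403 -or $attempt -ge $maxRetries) { throw }
    Start-Sleep -Seconds $retryInterval
  }
} while ($true)
$result</pre>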
<pre class="#brush:ps;toolbar:false;gutter:false;">$maxRetries = 3
$retryInterval = 10
try
{
  Invoke-RestMethod @invokeArgs -RetryIntervalSec $retryInterval -MaximumRetryCount $maxRetries
}
catch
{
  if ($null -ne $_.Exception -and $null -ne $_.Exception.Response) {
    $statusCode = $_.Exception.Response.StatusCode
    $message = $_.Exception.Message
    Write-Information "Error invoking REST method. Status Code: $statusCode : $message"
    Write-Verbose "Exception Type: $($_.Exception.GetType())"
    Write-Verbose "Exception StackTrace: $($_.Exception.StackTrace)"
  }
  Write-Information "Failed to complete REST method after $maxRetries attempts."
  throw
}</pre>
<h2 id="putting-it-all-together">Putting it all Together</h2>
<p>Let’s flesh this out by continuing with a pseudo-example of getting YAML build history. The initial request fetches the most recent builds with a <code>GET</code> request, and then the function loops using the <code>POST</code> method on the HierarchyQuery endpoint until the continuation token is empty.</p>
<pre class="#brush:ps;toolbar:false;gutter:false;">function Get-AzurePipelineHistory
{
  param(
    [string]$Organization,
    [string]$Project,
    [string]$PipelineId,
    [string]$AccessToken
  )

  $commonArgs = @{ Organization=$Organization; ProjectName=$Project; AccessToken=$AccessToken }
  $apiPath = "_build"
  $pipelineArgs = @{
    definitionId = $PipelineId
    "__rt" = "fps"
    "__ver" = "2.0"
  }

  $results = @()
  $continuationToken = $null

  do {
    if ($null -eq $continuationToken) {
      # fetch initial result
      $apiResult = Invoke-AzureDevOpsRestMethod -Method "GET" @commonArgs -ApiPath $apiPath -QueryStringParameters $pipelineArgs
      $pipelineRuns = $apiResult.fps.dataProviders.data."ms.vss-build-web.runs-data-provider"
    } else {
      $requestBody = @{
        contributionIds = @("ms.vss-build-web.runs-data-provider")
        dataProviderContext = @{
          properties = @{
            continuationToken = $continuationToken
            definitionId = $PipelineId
            sourcePage = @{
              routeId = "ms.vss-build-web.pipeline-details-route"
              routeValues = @{
                project = $Project
                viewname = "details"
                controller = "ContributedPage"
                action = "Execute"
              }
            }
          }
        }
      }
      # fetch additional results
      $apiResult = Invoke-AzureDevOpsRestMethod -Method "POST" @commonArgs -Body $requestBody
      $pipelineRuns = $apiResult.dataProviders."ms.vss-build-web.runs-data-provider"
    }

    # hang onto the continuation token to fetch additional data
    $continuationToken = $pipelineRuns.continuationToken
    # append results
    $results += $pipelineRuns.runs
  } until ([string]::IsNullOrEmpty($continuationToken))

  return $results
}</pre>
<p>The end result is a concatenated collection of all the <code>runs</code> for this pipeline. It’s worth noting that if you have a very long list of pipeline runs, you’re going to have a lot of data to sift through. This can definitely be optimized and my next post will dig deeper into that.</p>
<p>That’s it for now. Happy coding.</p>bryanhttp://www.blogger.com/profile/01332614158223702009noreply@blogger.com0tag:blogger.com,1999:blog-8942599.post-66384390525756608342021-03-14T22:45:00.000-04:002021-03-14T22:45:14.694-04:00PowerShell Module Quick Start<p>PowerShell modules are a great way to organize related functions into a reusable unit. Today's post will provide a simple walk-through that creates a bare-bones module that you can start to work with.</p> <strong>Table of Contents</strong> <ul> <li><a href="#why-create-a-module">Why create a Module?</a></li> <li><a href="#create-the-manifest-module">1. Create the Manifest Module</a></li> <li><a href="#public-vs-private-folders">2. Public vs Private Folders</a></li> <li><a href="#create-the-module-script-psm1">3. Create the Module Script (psm1)</a></li> <li><a href="#using-the-module">4. Using the Module</a></li> <li><a href="#whats-next">What's Next?</a></li> <li><a href="#wrap-up">Wrap Up</a></li> </ul> <h3 id="why-create-a-module">Why create a Module?</h3> <p>First of all, let's look at some of the motivations for creating a module - after all, you could simply move all your common utility methods into a 'helper' script and then import it into the session. Why go through the overhead of creating a module?</p> <p>Not every script or utility function needs to be a module, but if you're creating a series of scripts that rely on some common helper functions, I can think of two primary reasons why you should consider creating a module:</p> <ul> <li><strong>Pathing</strong>: when you import a script using "dot sourcing", the "." means the current working folder. If you import individual script files, you need to specify the relative path of the file you're importing. This strategy starts to fall apart if the file you're importing references additional imports. 
However, when you create a module, you import all the scripts into the session so pathing is no longer an issue.</li> <li><strong>Discoverability</strong>: PowerShell scripts can get fairly complex, so if your helper utility contains dozens of smaller methods then developers consuming your script will need to 'grok' all of those functions, too. By creating a module, you can expose only the functions you want, which will make it easier for developers to understand and use.</li> </ul> <h3 id="create-the-manifest-module">1. Create the Manifest Module</h3> <p>A module is composed of:</p> <ul> <li><strong><module-name>.psd1</strong>: a manifest file that describes important details about the module (name, version, license, dependencies, etc)</li> <li><strong><module-name>.psm1</strong>: the main entry point when the module is imported.</li> </ul> <p><strong>Important Note:</strong> The folder and module manifest name must match!</p> <p>To simplify this process, Microsoft has provided the <code>New-ModuleManifest</code> cmdlet for us. They have <a href="https://docs.microsoft.com/en-us/powershell/scripting/developer/module/how-to-write-a-powershell-module-manifest?view=powershell-7.1">some good guidance listed here</a> on some additional settings you may want to provide, but here's the bare-bones nitty-gritty:</p> <pre class="brush:ps;gutter:false;toolbar:false;">mkdir MyModule
cd MyModule
New-ModuleManifest -Path .\MyModule.psd1 -RootModule MyModule.psm1</pre>
<h3 id="public-vs-private-folders">2. Public vs Private Folders</h3>
<p>When you're creating a module, it's best to think about what functions and features that you're exposing to your module consumers. This is very similar to the access-modifiers we put on classes in our .NET assemblies.</p>
<p>The less you expose, the easier it is for consumers to understand what your module does. Limiting what you expose can also protect you from accidentally introducing breaking changes to consumers - if all of your functions are public you won't know which methods that external team members might be using; keeping this list small can help you focus where version compatibility is required.</p>
<p>A good practice is to put our functions into two folders: <em>public</em> and <em>private</em>:</p>
<pre class="brush:ps;gutter:false;toolbar:false;">mkdir Private
mkdir Public</pre>
<h3 id="create-the-module-script-psm1">3. Create the Module Script (psm1)</h3>
<p>The last piece is the <em>psm1</em> script. This simple script finds all the files and imports them into the session:</p>
<pre class="brush:ps;gutter:false;toolbar:false;"># MyModule.psm1
# Get Functions
$private = Get-ChildItem -Path (Join-Path $PSScriptRoot Private) -Include *.ps1 -File -Recurse
$public = Get-ChildItem -Path (Join-Path $PSScriptRoot Public) -Include *.ps1 -File -Recurse
# Dot source to scope
# load private scripts first
($private + $public) | ForEach-Object {
try {
Write-Verbose "Loading $($_.FullName)"
. $_.FullName
}
catch {
Write-Warning $_.Exception.Message
}
}
# Expose public functions. Assumes that function name and file name match
$publicFunctions = $public | Select-Object -ExpandProperty BaseName
Export-ModuleMember -Function $publicFunctions
</pre>
<p>The last section of the script exposes the functions in your <em>public</em> folder as part of your module. This assumes that the function name and file name are the same. An alternative to this approach is to set the <code>FunctionsToExport</code> list in the module manifest, but I find the convention-based approach easier.</p>
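<p>For reference, the manifest-based alternative would look something like this fragment of <em>MyModule.psd1</em> (the function names here are placeholders):</p>
<pre class="brush:ps;gutter:false;toolbar:false;"># MyModule.psd1 (fragment) - explicitly list the public functions
@{
  RootModule        = 'MyModule.psm1'
  ModuleVersion     = '1.0.0'
  FunctionsToExport = @('Get-Widget', 'New-Widget')
}</pre>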
<h3 id="using-the-module">4. Using the Module</h3>
<p>At this point, you're ready to start adding PowerShell scripts into your module. When you're ready to try it out, simply import the module by its manifest:</p>
<pre class="brush:ps;gutter:false;toolbar:false;">Import-Module .\MyModule.psd1 -Verbose</pre>
<p>To verify that your commands are exposed correctly:</p>
<pre class="brush:ps;gutter:false;toolbar:false;">(Get-Module MyModule).ExportedCommands</pre>
<p>During development of your module, it's important to realize that changes that are made to your scripts won't be visible until you reload the module:</p>
<pre class="brush:ps;gutter:false;toolbar:false;">Remove-Module MyModule -ErrorAction SilentlyContinue
Import-Module .\MyModule.psd1</pre>
<h3 id="whats-next">What's Next?</h3>
<p>There's lots more to creating a PowerShell module, such as setting the minimum supported PowerShell version, declaring dependencies, including .NET code assemblies, exposing types, etc. There are likely some additional considerations for publishing the module to a gallery, but unfortunately I'm not going to get into that for the purposes of this post. This article is helpful for creating the basic shell of a module, which you can use locally or on build servers.</p>
<h3 id="wrap-up">Wrap Up</h3>
<p>Creating a PowerShell module is a fairly quick process that helps to promote reuse between related scripts. With Microsoft's New-ModuleManifest cmdlet and the basic <em>psm1</em> file provided above, you can fast track creating a Module.</p>
<p>Happy codin'</p>bryanhttp://www.blogger.com/profile/01332614158223702009noreply@blogger.com0tag:blogger.com,1999:blog-8942599.post-10068937429679135692021-03-06T10:28:00.001-05:002021-03-06T10:40:36.575-05:00Using the Azure CLI to Call Azure DevOps REST API<p>Suppose the Azure DevOps REST API that you want to call isn't in the list of <em>az cli</em> supported commands. Does this mean your script needs to toggle between az cli and invoking REST endpoints? Fear not, there's actually a built-in az devops command "<code>az devops invoke</code>" that can call any Azure DevOps REST API endpoint.</p> <p>The <code>az devops invoke</code> command is fairly easy to use, but the trick is discovering the command-line arguments you need to provide to pull it off. This post will walk you through that.</p> <p><strong>Table of Contents</strong></p> <ul> <li><a href="#obtain-a-list-of-endpoints">Obtaining a List of Available Endpoints</a></li> <li><a href="#finding-the-right-endpoint">Finding the right endpoint</a></li> <li><a href="#invoking-endpoints">Invoking endpoints</a></li> <li><a href="#adding-querystring-parameters">Adding Query-string Parameters</a></li> <li><a href="#specifying-the-API-version">Specifying the API version</a></li> <li><a href="#providing-a-JSON-body">Providing a JSON Body</a></li> <li><a href="#known-issues">Known Issues</a></li> <li><a href="#wrapping-up">Wrapping Up</a></li> </ul> <h3><a name="obtain-a-list-of-endpoints">Obtain a List of Available Endpoints</a></h3> <p>The <code>az devops invoke</code> command is a neat alternative to using the REST API, but understanding what command-line arguments you'll need isn't obvious.</p> <p>Let's start by finding out which endpoints are available by calling <code>az devops invoke</code> with no arguments, piping the output to a file for reference:</p> <pre class="brush:ps; gutter: false; toolbar: false;">az devops invoke > az_devops_invoke.json</pre>
<p>This will take a few moments to produce. <a href="https://github.com/bryanbcook/az-devops-cli-examples/blob/master/az_devops_invoke.json">I've got a full listing of endpoints located here</a>.</p>
<p>The list of endpoints is grouped by 'Area', and each endpoint has a unique 'resourceName' and 'routeTemplate'. Here's a snippet:</p>
<pre class="brush: ps; gutter: false; toolbar: false;">...
{
  "area": "boards",
  "id": "7f9949a0-95c2-4c29-9efd-c7f73fb27a63",
  "maxVersion": 5.1,
  "minVersion": 5.0,
  "releasedVersion": "0.0",
  "resourceName": "items",
  "resourceVersion": 1,
  "routeTemplate": "{project}/_apis/{area}/boards/{board}/{resource}/{*id}"
},
{
  "area": "build",
  "id": "5a21f5d2-5642-47e4-a0bd-1356e6731bee",
  "maxVersion": 6.0,
  "minVersion": 2.0,
  "releasedVersion": "5.1",
  "resourceName": "workitems",
  "resourceVersion": 2,
  "routeTemplate": "{project}/_apis/{area}/builds/{buildId}/{resource}"
}
...
</pre>
<p>You can also use the JMESPath query syntax to reduce the list:</p>
<pre class="brush: ps; gutter: false; toolbar: false;">az devops invoke --query "[?area == 'build']"</pre>
<p><em>Interesting note: If you study the <a href="https://github.com/Azure/azure-devops-cli-extension/blob/7b99d8536a9eb82ca73e690dcef4a7ff6951869f/azure-devops/azext_devops/dev/team/invoke.py">source code for the az devops cli extension,</a> you'll notice that all commands in the devops extension are using this same list as the underlying communication mechanism.</em></p>
<h3><a name="finding-the-right-endpoint">Finding the right endpoint</a></h3>
<p>Finding the desired API in the list of endpoints might take a bit of research. All of the endpoints are grouped by '<em>area</em>' and then '<em>resourceName</em>'. I find that the '<em>area</em>' keyword lines up fairly closely with the API documentation, but you'll have to hunt through the endpoint list until you find the '<em>routeTemplate</em>' that matches the API you're interested in.</p>
<p>Let's use the <a href="https://docs.microsoft.com/en-us/rest/api/azure/devops/build/latest/get?view=azure-devops-rest-6.0">Get Latest Build REST API</a> as an example. Its REST endpoint is defined as:</p>
<pre class="brush: ps; gutter: false; toolbar: false;">GET https://dev.azure.com/{organization}/{project}/_apis/build/latest/{definition}?api-version=6.0-preview.1
</pre>
<p>The <em>routeTemplate</em> is parameterized such that <em>area</em> and <em>resource</em> parameters correspond to the <em>area</em> and <em>resourceName</em> in the object definition.</p>
<p>From this, we hunt through all the '<em>build</em>' endpoints until we find this matching endpoint:</p>
<pre class="brush: ps; gutter: false; toolbar: false;">{
  "area": "build",
  "id": "54481611-01f4-47f3-998f-160da0f0c229",
  "maxVersion": 6.0,
  "minVersion": 5.0,
  "releasedVersion": "0.0",
  "resourceName": "latest",
  "resourceVersion": 1,
  "routeTemplate": "{project}/_apis/{area}/{resource}/{definition}"
}
</pre>
<h3><a name="invoking-endpoints">Invoking endpoints</a></h3>
<p>Once you've identified the endpoint from the endpoint list, next you need to map the values from the route template to the command-line.</p>
<p>The mapping between command-line arguments and the routeTemplate should be fairly obvious. The values for "<em>{area}</em>" and "<em>{resource}</em>" are picked up from their corresponding command-line arguments, and the remaining arguments must be supplied as name-value pairs with the <code>--route-parameters</code> argument.</p>
<p>Using our Get Latest Build example, "<em>{project}</em>" and "<em>{definition}</em>" are provided on the command line like this:</p>
<pre class="brush: ps; gutter: false; toolbar: false;">az devops invoke `
--area build `
--resource latest `
--organization https://dev.azure.com/myorgname `
--route-parameters `
project="MyProject" `
definition=1234 `
--api-version=6.0-preview</pre>
<h3><a name="adding-querystring-parameters">Adding query-string parameters</a></h3>
<p>We can further extend this example by specifying query string parameters using the <code>--query-parameters</code> argument. In this example, we can get the latest build for a specific branch by specifying the <em>branchName</em> parameter:</p>
<pre class="brush: ps; gutter: false; toolbar: false;">az devops invoke `
--area build --resource latest `
--organization https://dev.azure.com/myorgname `
--route-parameters project="MyProject" definition=1234 `
--query-parameters `
branchName=develop `
--api-version=6.0-preview
</pre>
<p>Note that while the CLI will validate route-parameters, it does not complain if you specify a query-string parameter that is misspelled or not supported.</p>
<h3><a name="specifying-the-API-version">Specifying the API Version</a></h3>
<p>One of the challenges is knowing which API version to use. Frankly, I've had the most luck by specifying the latest version (e.g. 6.0-preview). As a general rule, the <em>releasedVersion</em> in the endpoint list should indicate which version to use, which is constrained by the <em>maxVersion</em>.</p>
<p>If the <em>releasedVersion</em> is set to <code>"0.0"</code>, then the preview flag is required.</p>
<h3><a name="providing-a-JSON-body">Providing a JSON Body</a></h3>
<p>To provide a JSON body for PUT and POST requests, you'll need to provide a JSON file using the <code>--in-file</code> and <code>--http-method</code> parameters.</p>
<p>For example, cancelling a build:</p>
<pre class="brush: ps; gutter: false; toolbar: false;"># Write the JSON body to disk
'{ "status": "Cancelling"}' | Out-File -FilePath .\body.json
# PATCH the build with a cancelling status
az devops invoke `
--area build `
--resource builds `
--organization https://dev.azure.com/myorgname `
--route-parameters `
project="MyProject" `
buildId=2345 `
--in-file .\body.json `
--http-method patch
</pre>
<h3><a name="known-issues">Known Issues</a></h3>
<p>By design, you would assume that the <em>area</em> and <em>resourceNames</em> in the list of endpoints are intended to be unique, but unfortunately this isn't the case. Again, referring to the <a href="https://github.com/Azure/azure-devops-cli-extension/blob/7b99d8536a9eb82ca73e690dcef4a7ff6951869f/azure-devops/azext_devops/dev/team/invoke.py">source code of the extension, when trying to locate the endpoints by area + resource</a> it appears to be a <a href="https://en.wikipedia.org/wiki/First-past-the-post_voting">first-past-the-post</a> scenario where only the first closest match is considered.</p>
<p>In this scenario, it would be helpful <a href="https://github.com/Azure/azure-devops-cli-extension/issues/1012">if we could specify the endpoint id</a> from the command-line but this isn't supported yet.</p>
<p>To see the duplicates (it's not a small list):</p>
<pre class="brush: ps; gutter: false; toolbar: false;">$endpoints = az devops invoke | Select-Object -skip 1 | ConvertFrom-Json
$endpoints | Group-Object -Property area,resourceName | Where-Object { $_.Count -gt 1 }
</pre>
<p>The important thing to realize is that this list isn't unique to the az devops extension, it's actually a global list which is exposed from Azure DevOps. Perhaps how this list is obtained is something I'll blog about later.</p>
<h3><a name="wrapping-up">Wrapping Up</a></h3>
<p>While there are still some things that are easier to do using the REST API, the Azure DevOps CLI offers a built-in capability to invoke the majority of the underlying APIs, though the biggest challenge is finding the right endpoint to use.</p>
<p>My personal preference is to start with the Azure DevOps CLI because I can jump in and start developing without having to worry about authentication headers, etc. I can also combine the results with JMESPath filtering.</p>bryanhttp://www.blogger.com/profile/01332614158223702009noreply@blogger.com0tag:blogger.com,1999:blog-8942599.post-31382894059503653142020-08-27T07:57:00.000-04:002020-08-27T07:57:00.697-04:00Quick Start with the JIRA REST API Browser<p><a title="Milky Way over Guilderton Lighthouse, Western Australia - 35mm Panorama" href="https://www.flickr.com/photos/trevor_dobson_inefekt69/25626842713/in/photolist-F3ygvK-Jbxe3t-27ZFVgj-HqWTyG-c9r9aq-NN5d4J-RdCcza-PBdt4K-2jvTywk-Ryodcn-PnMd8P-2hQfgH1-2j2QYhX-2j8nD2L-2jqgiGV-PsQgV6-QCGsGh-MvCqva-Kr1MfR-S8xpTv-2jevDjP-XgYNvY-FhW5mC-HznHm3-2iWgNfy-rXmscs-StceYj-v7T3UT-35DFqw-22fQXvY-RMvvn3-2hCvWwR-MBQuvn-xEPXtu-F3aKRT-b9RaHF-MvTxfQ-KoAyoC-yyYyNF-RNMUEe-GsnVSa-27Dixxq-XKo15F-NbVNqM-URdHSp-2gNGKYt-FGQkZC-2iFUkeb-ndxh37-2fGDQBv" data-flickr-embed="true"><img alt="Milky Way over Guilderton Lighthouse, Western Australia - 35mm Panorama" src="https://live.staticflickr.com/1538/25626842713_dd1efab860_c.jpg" width="800" height="399" /></a><script async src="//embedr.flickr.com/assets/client-code.js" charset="utf-8"></script></p> <p>Looking to start programming against the JIRA API? Me too! This post walks through setting up JIRA locally on your computer and installing the JIRA API Explorer to start poking at the APIs.</p> <h1>Why use the REST API Browser?</h1> <p>Good question. Postman and other tools can poke at JIRA endpoints just as well, but the API Explorer comes equipped with help documentation and lets you quickly pick out an endpoint without having to worry about authentication, etc. You can easily toggle between GET, POST, PUT, DELETE commands for endpoints that support it, and the explorer provides a simple form for entering required and optional fields. 
</p> <p><a href="https://drive.google.com/uc?id=1tED4M5hHwbcVyeFS2CWZrN8V11U1FLQO"><img title="jira-explorer" style="display: inline; background-image: none;" border="0" alt="jira-explorer" src="https://drive.google.com/uc?id=1ym1C89NBiorI-GlYEJB3gIcPzCbQEdoR" width="644" height="314" /></a></p> <h1>Setup a JIRA instance in Docker</h1> <p>Setting up JIRA in a docker container is pretty easy.</p> <pre><code class="language-shell" lang="shell">docker volume create --name jiraVolume
docker run -v jiraVolume:/var/atlassian/application-data/jira --name="jira" -d -p 8080:8080 atlassian/jira-software
</code></pre>
<p>After the docker image is downloaded, open your browser to <a class="url" href="http://localhost:8080" target="_blank">http://localhost:8080</a></p>
<p>On first start, you will be prompted with a series of prompts:</p>
<ul>
<li>Set it up for me</li>
<li>Create a trial license key</li>
<li>Setup an admin user account</li>
<li>Create an empty project, use a sample project or import from an existing project.</li>
</ul>
<h2>Quick Setup</h2>
<p>When you first navigate to the docker instance, you'll be prompted with a choice:</p>
<p><a href="https://drive.google.com/uc?id=1UyzJgl-GegQDELhVIqAHyXE53Nk-nkzf"><img title="jira-setup-1" style="display: inline; background-image: none;" border="0" alt="jira-setup-1" src="https://drive.google.com/uc?id=1OlUHFTcZKuM5K3yX_fABBBsIsCy_vJh3" width="644" height="455" /></a></p>
<p>Let's go with "Set it up for me", and then click on "Continue to MyAtlassian"</p>
<h2>Create a Trial License</h2>
<p>You'll be redirected to provide information for creating a trial license. Select "JIRA Server", provide a name for your organization and leave the default values for the server name.</p>
<p><a href="https://drive.google.com/uc?id=15H3SGf9MtROuolF8FtgPVCkfk314jewC"><img title="jira-setup-2" style="display: inline; background-image: none;" border="0" alt="jira-setup-2" src="https://drive.google.com/uc?id=1v41eM9uxgzbMaBShuZ2tAQt-A0EYxLQm" width="548" height="484" /></a></p>
<p>When you click next, you'll be redirected to a screen that shows the license key and a <strong>Confirmation</strong> dialog.</p>
<p><a href="https://drive.google.com/uc?id=1pDNwBv0QKvGrO4EUYIkVJ0edU0wMyQWC"><img title="jira-setup-3" style="display: inline; background-image: none;" border="0" alt="jira-setup-3" src="https://drive.google.com/uc?id=1M6Va6TyTfY6czl5Jcvs9a4ON6LqxaayQ" width="499" height="211" /></a></p>
<p>Click <em>Yes</em> to confirm the installation.</p>
<h2>Create an Admin User Account and default settings</h2>
<p>Somewhat self-explanatory, provide the details for your admin account and click <em>Next</em>.</p>
<p><a href="https://drive.google.com/uc?id=1dZTUgztGL3WnwYuxc0a9uuvnsdcGIPfV"><img title="jira-setup-4" style="display: inline; background-image: none;" border="0" alt="jira-setup-4" src="https://drive.google.com/uc?id=1zNXhE1CB4QjjogFtV7L8tnX5GdTXdHVP" width="644" height="343" /></a></p>
<p>You should see a setup page run for a few minutes. When this completes, you'll need to:</p>
<p><a href="https://drive.google.com/uc?id=1H3iKBuTcR1ufIr3FOjEHCbyczVx8hrhf"><img title="jira-setup-5" style="display: inline; background-image: none;" border="0" alt="jira-setup-5" src="https://drive.google.com/uc?id=1MoJNvYQUrSkPR9jSh8GLmX9-79-CNxBD" width="644" height="422" /></a></p>
<ul>
<li>Login with the admin account you created</li>
<li>Pick your default language</li>
<li>Choose an Avatar</li>
</ul>
<h2>Create your First Project</h2>
<p>The last step of the Quick Setup is to create your first project. Here you can create an empty project, import from an existing project or use a sample project. As we're interested in playing with the API, pick the "See it in action" option to create a JIRA project with a prepopulated backlog.</p>
<p><a href="https://drive.google.com/uc?id=1IYb03qvTR-1_wuEUd3Z13D7Y97uBvzWw"><img title="jira-setup-7" style="display: inline; background-image: none;" border="0" alt="jira-setup-7" src="https://drive.google.com/uc?id=11BnODAfedodtmrfAvMz9tUmJ2F7h49cr" width="1018" height="259" /></a></p>
<p>Selecting "See it in action" prompts for the style of project and the project details.</p>
<p><a href="https://drive.google.com/uc?id=1oRszGct7SUwtQ7jDCZt0-9w-XU6AjHX-"><img title="jira-setup-8" style="display: inline; background-image: none;" border="0" alt="jira-setup-8" src="https://drive.google.com/uc?id=1mea_U8oeMSu04Tgffvwa21jGJAdGQHAI" width="682" height="319" /></a></p>
<p>Great job. That was easy.</p>
<h1>Install the API Browser</h1>
<p>The REST API Browser is available through the Atlassian Marketplace. To install the add-on:</p>
<ol>
<li>In the top right, select Settings <strong>⟶</strong> Applications</li>
<li>Click the "Manage Apps" tab</li>
<li>Search for "API Browser"
<br />
<br /><a href="https://drive.google.com/uc?id=1wASzYBqKvFHPnB-M5NXCsgTeetsrAwyb"><img title="manage-apps-1" style="display: inline; background-image: none;" border="0" alt="manage-apps-1" src="https://drive.google.com/uc?id=1jqHooBM9Xg9ujGC5Kp1hjlE-2O9KvqZ5" width="644" height="394" /></a>
<br /></li>
<li>Click "Install"</li>
</ol>
<p>Super easy.</p>
<h1>Using the REST API Browser</h1>
<p>The hardest part of using the REST API Browser is locating it after it's been installed. Fortunately, that's easy, too. </p>
<p>In the top-right, select Settings <strong>⟶</strong> System and scroll all the way to the bottom to Advanced: REST API Browser</p>
<p><a href="https://drive.google.com/uc?id=1SDHIWhWGc4Vwud-Of9V7M7os5XZmYJS3"><img title="manage-apps-2" style="display: inline; background-image: none;" border="0" alt="manage-apps-2" src="https://drive.google.com/uc?id=10yHYsGlLVkAl0fHD-iUuch0JoNzZwRXG" width="198" height="314" /></a></p>
<h2>Try it out!</h2>
<p>The REST API Browser is fairly intuitive, simply find the endpoint you're interested in on the right hand side and provide the necessary fields in the form.</p>
<p><a href="https://drive.google.com/uc?id=1wRNXkC_NaqeiZQGAWUAffaqVTrHi4RAx"><img title="rest-api-browser-1" style="display: inline; background-image: none;" border="0" alt="rest-api-browser-1" src="https://drive.google.com/uc?id=1T021ify6C1KQ2SXtJ_RKhKaf3hi6D2wV" width="644" height="415" /></a></p>
<h2>Mind your verbs...</h2>
<p>Pay special attention to the HTTP verbs being used: GET, POST, PUT, DELETE. Many of JIRA's endpoints differ only by the verb; endpoints that accept POST or PUT will either create or update records, and not all endpoints support GET.</p>
<p><a href="https://drive.google.com/uc?id=1sC0-tHTdDwDQl_kCU_VuzhhXGdWQIgl8"><img title="rest-api-browser-2" style="display: inline; background-image: none;" border="0" alt="rest-api-browser-2" src="https://drive.google.com/uc?id=1V_QbPH_kmT5-3yRlTlILdAlGQmIySt0j" width="459" height="146" /></a></p>
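<p>Once you've located an endpoint in the browser, it's easy to replicate the same call from a script. As a rough sketch against the local instance from earlier (JIRA Server accepts basic authentication; the credentials and the issue key <em>SP-1</em> are placeholders — substitute your admin account and a key from your sample project):</p>

```powershell
# Hypothetical example: fetch an issue from the local JIRA instance
$pair = "admin:password"   # placeholder credentials
$bytes = [System.Text.Encoding]::ASCII.GetBytes($pair)
$headers = @{ Authorization = "Basic " + [Convert]::ToBase64String($bytes) }

# GET /rest/api/2/issue/{issueKey}
$issue = Invoke-RestMethod -Uri "http://localhost:8080/rest/api/2/issue/SP-1" -Headers $headers
$issue.fields.summary
```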
<h1>Wrap up</h1>
<p>Now you have no excuse to start picking at the JIRA API.</p>
<p>Side note: I wrote this post using <a href="https://typora.io/">Typora</a>, a super elegant markdown editor. I'm still experimenting with how to best integrate it with my blog platform (Blogger), but I might look at some combination of static markdown files with a command-line option to publish. I will most likely post that setup when I get there.</p>
<p>Until then, happy coding.</p>bryanhttp://www.blogger.com/profile/01332614158223702009noreply@blogger.com0tag:blogger.com,1999:blog-8942599.post-59997129873617375682020-08-18T09:24:00.000-04:002020-08-18T09:24:00.187-04:00Cleaning up stale git branches<a title="Broken branches" href="https://www.flickr.com/photos/astrid/10604866893/in/photolist-ha7HsD-23VM5kq-2g9qkBx-V93xNB-2jrn6na-2huXrq7-XPYyLY-26y8TSx-MEFWi8-GpXmJA-2huX2eM-JRWEjD-2fFhc1N-2jmbe6o-EeJH9D-4BV8DU-2jnhxG9-2gwAuHG-2hpwZSL-7tE3RQ-2br9vhT-2i1Ehq6-2hUu1AW-MFN9sP-FxMJuU-mPepg-ZeFMvS-2hv1G3B-ajnFjy-bfUJ6X-TtR1Vq-Pbfyb6-E8h6mR-4sxvJc-2b9HS8B-26Dwwdk-KiEQbq-23mS14c-uJxgw-bst6gg-75Jqse-NvwV3K-2jjgBYW-2a2KFLN-2fDC33i-6mMU9f-JBWCZx-Ga5VSr-2iysHrG-DkeCmK" data-flickr-embed="true"><img alt="Broken branches" src="https://live.staticflickr.com/3701/10604866893_4209dc7941_c.jpg" width="800" height="546" /></a><script async src="//embedr.flickr.com/assets/client-code.js" charset="utf-8"></script> <p>If you have pull-requests, you likely have stale branches. What’s the best way to find which branches can be safely deleted? This post will explore some approaches to find and delete stale branches.</p> <p>As there are many different ways we can approach this, I’m going to start with the most generic concepts and build up to a more programmatic solution. I want to use the Azure DevOps CLI and REST APIs for this instead of git-centric commands because I want to be able to run the scripts from any computer against the latest version of the repository. 
This also opens up the possibility of running these activities in a PowerShell script in an Azure Pipeline, as <a href="http://www.bryancook.net/2020/08/running-azure-devops-cli-from-azure.html" target="_blank">outlined in my previous post</a>.</p> <p><strong>Table of Contents</strong></p> <ul> <li><a href="#deleting-branches">Who can Delete Branches?</a></li> <li><a href="#finding-authors">Finding the Author of a Branch</a></li> <li><a href="#finding-contributors">Finding the Last Contributor to a Branch</a></li> <li><a href="#check-the-expiry">Check the Expiry Date</a></li> <li><a href="#cross-referencing-prs">Finding branches that have completed Pull-Requests</a></li> <li><a href="#wrap-up">Wrapping up</a></li> </ul> <h2><a name="#deleting-branches">Who can delete Branches?</a></h2> <p>One of the reasons branches don’t get deleted might be a permissions problem. In Azure DevOps, the default permission settings set up the creator of the branch with the permission to delete it. If you don’t have permission, the option isn’t available to you in the user-interface, and this creates a missed opportunity to remediate the issue when someone other than the author completes the PR.</p> <p><a href="https://drive.google.com/uc?id=10tgXlt6yErcneWUu1UEgpoUAe4e3NOV4"><img title="pr-complete-merge-disabled-delete" style="display: inline; background-image: none;" border="0" alt="pr-complete-merge-disabled-delete" src="https://drive.google.com/uc?id=11JfKaTtFheSs1L-WKj9M0BaJhwRQFQWK" width="426" height="484" /></a></p> <p><a href="https://drive.google.com/uc?id=1htOxlaXH5jb3earMLRh5YAAu8KYKlWKn"><img title="pr-delete-source-branch" style="display: inline; background-image: none;" border="0" alt="pr-delete-source-branch" src="https://drive.google.com/uc?id=11W3oD-r_-kxfkZVbHZRn9JxnalcL68WZ" width="644" height="330" /></a></p> <p>You can change the default setting by adding the appropriate user or group to the repository’s permissions and the existing branches will inherit. 
You’ll need the “<em>Force push (rewrite history, delete branches and tags)” </em>permission to delete the branch.  See my <a href="http://www.bryancook.net/2020/08/securing-git-branches-through-azure.html" target="_blank">last post on ways to apply this policy programmatically</a>.</p> <p><a href="https://drive.google.com/uc?id=1kjQQ8JnD6o0lAZMQ8AHcqcvD5UfhspN_"><img title="branch-permission-forcepush" style="display: inline; background-image: none;" border="0" alt="branch-permission-forcepush" src="https://drive.google.com/uc?id=17ZyPWksS6C0MIlJEk9BMw_U_ypNqWrGH" width="644" height="424" /></a></p> <p>If we want to run this from a pipeline, we would have to grant the Build Service the same permissions.</p> <h2><a name="finding-authors">Finding the Author of a Branch</a></h2> <p>One approach to cleaning-up stale branches is the old fashion way: nagging. Simply track down the branch authors and ask them to determine if they’re done with them.</p> <p>We can find the authors for the branches with the following <em>PowerShell + Az DevOps CLI:</em></p> <pre class="brush: ps; gutter: false;">$project = "<project-name>"
$repository = "<repository-name>"
$refs = az repos ref list `
    --query "[?starts_with(name, 'refs/heads')].{name:name, uniqueName:creator.uniqueName}" `
--project $project --repository $repository |
ConvertFrom-Json
$refs | Sort-Object -Property uniqueName
</pre>
<p><a href="https://drive.google.com/uc?id=1tUjW0G9PtW7sY82j6gBjkePyiPEvLDYW"><img title="az-repos-ref-list-example1" style="display: inline; background-image: none;" border="0" alt="az-repos-ref-list-example1" src="https://drive.google.com/uc?id=1Bapqex2f9iKyq2eUuCY7tI8PfbKoTqGO" width="644" height="94" /></a></p>
<h2><a name="finding-contributors">Finding the Last Contributor to a Branch</a></h2>
<p>Sometimes, the author of the branch isn’t the person doing the work. If this is the case, you need to track down the last person to commit against the branch. This information is available in the Azure DevOps user interface (Repos –> Branches):</p>
<p><a href="https://drive.google.com/uc?id=1in8L9XEXPar2FKXO1xft_y91NP-pfchS"><img title="branch-authors" style="display: inline; background-image: none;" border="0" alt="branch-authors" src="https://drive.google.com/uc?id=1S1SlliVfkIoJhnelliSmapW1-c0Icu2s" width="644" height="181" /></a></p>
<p>If you want to obtain this information programmatically, <i>az repos ref list</i> provides us with the <em>objectId</em> SHA-1 of the most recent commit. Although <i>az repos</i> doesn’t expose a command to retrieve the commit details, we can use the <i>az devops invoke</i> command to call the <a href="https://docs.microsoft.com/en-us/rest/api/azure/devops/git/commits/get?view=azure-devops-rest-5.1" target="_blank">Get Commit</a> REST endpoint. </p>
<p>When we fetch the detailed information on the branch, we want to get the author that created the commit and the date that they pushed it to the server. We want the push details because a developer may have made the commit a long time ago but only recently updated the branch.</p>
<pre class="brush: ps; gutter: false;">$project = "<project-name>"
$repository = "<repository-name>"
$refs = az repos ref list --project $project --repository $repository --filter heads | ConvertFrom-Json
$results = @()
foreach($ref in $refs) {

    $objectId = $ref.objectId

    # fetch individual commit details
    $commit = az devops invoke `
        --area git `
        --resource commits `
        --route-parameters `
            project=$project `
            repositoryId=$repository `
            commitId=$objectId |
        ConvertFrom-Json

    $result = [PSCustomObject]@{
        name = $ref.name
        creator = $ref.creator.uniqueName
        lastAuthor = $commit.committer.email
        lastModified = $commit.push.date
    }

    $results += ,$result
}
$results | Sort-Object -Property lastAuthor
</pre>
<p><a href="https://drive.google.com/uc?id=1tyOr0ZZ2mVM55shhG4DOeTIitHtJ_v8w"><img title="az-repos-ref-list-example" style="display: inline; background-image: none;" border="0" alt="az-repos-ref-list-example" src="https://drive.google.com/uc?id=1gjWHnIj0qVHp9UYrTjylRTB2cWV27yg9" width="644" height="103" /></a></p>
<p>This gives us a lot of details for the last commit in each branch, but if you’ve got a lot of branches, fetching each commit individually could be really slow. So, instead we can use the same <a href="https://docs.microsoft.com/en-us/rest/api/azure/devops/git/commits/get?view=azure-devops-rest-5.1" target="_blank">commits endpoint</a> to fetch the commit information in batches by providing a collection of objectIds in a comma-delimited format.</p>
<p>Note that there’s a limit to how many commits we can ask for at a time, so I’ll have to batch my batches. I could also use the <a href="https://docs.microsoft.com/en-us/rest/api/azure/devops/git/commits/get%20commits%20batch?view=azure-devops-rest-5.1" target="_blank">Get Commits Batch endpoint</a> that accepts the list of ids in the body of a POST message. </p>
<p>The following shows me batching 50 commits at a time. Your batch size may need to be smaller if your server or <em>organization</em> name is longer, as the ids are passed on the query string:</p>
<pre class="brush: ps; gutter: false;">$batchSize = 50
$batches = [Math]::Ceiling($refs.Length / $batchSize)
for( $x=0; $x -lt $batches; $x++ )
{
    # take a batch
    $batch = $refs | Select-Object -First $batchSize -Skip ($x * $batchSize)

    # grab the ids for the batch
    $ids = ($batch | ForEach-Object {$_.objectId}) -join ','

    # ask for the commit details for these items
    $commits = az devops invoke --area git --resource commits `
        --route-parameters `
            project=$project `
            repositoryId=$repository `
        --query-parameters `
            searchCriteria.ids=$ids `
            searchCriteria.includePushData=true `
        --query "value[]" | ConvertFrom-Json

    # loop through this batch of commits
    for($i=0; $i -lt $commits.Length; $i++) {

        $ref = $refs[($x*$batchSize)+$i]
        $commit = $commits[$i]

        # add commit information to the batch
        $ref | Add-Member -Name "author" -Value $commit.author.email -MemberType NoteProperty
        $ref | Add-Member -Name "lastModified" -Value $commit.push.date -MemberType NoteProperty

        # add the creator's email on here for easier access in the select-object statement...
        $ref | Add-Member -Name "uniqueName" -Value $ref.creator.uniqueName -MemberType NoteProperty
    }
}
$refs | Select-Object -Property name,uniqueName,author,lastModified
</pre>
<p><strong>Caveat about this approach:</strong> If you've updated the source branch by merging from the target branch, the last author will be from that target branch – which isn’t what we want. Even worse, there's no way to infer this scenario from the git commit details alone. One way we can solve this problem is to fetch the commits in the branch and walk up the parents of the commit until we find a commit that has more than one parent – this would be our merge commit from the target branch, which should have been done by the last author on the branch. Note that the parent information is only available if you query these items one-by-one, so this approach could be painfully slow. (If you know a better approach, let me know)</p>
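<p>If you do want to experiment with the parent-walking idea, a rough sketch might look like the following. It reuses the single-commit <em>az devops invoke</em> call from above and walks first-parent history until it finds a commit with more than one parent. Be warned: a branch that never merged from its target would walk all the way to the root, so the arbitrary depth cap below is a necessary safety net:</p>

```powershell
# Experimental sketch: find the first merge commit reachable from a branch tip.
# $project and $repository come from the earlier examples; $maxDepth is arbitrary.
function Find-MergeCommit($project, $repository, $tipObjectId, $maxDepth = 50) {
    $commitId = $tipObjectId
    for ($depth = 0; ($depth -lt $maxDepth) -and $commitId; $depth++) {
        $commit = az devops invoke --area git --resource commits `
            --route-parameters `
                project=$project `
                repositoryId=$repository `
                commitId=$commitId |
            ConvertFrom-Json
        if ($commit.parents.Count -gt 1) {
            return $commit   # first merge commit found; committer is likely the last real author
        }
        $commitId = $commit.parents | Select-Object -First 1
    }
    return $null  # no merge commit within the depth limit
}
```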
<h2><a name="check-the-expiry">Check the Expiry</a></h2>
<p>Now that we have date information associated to our branches, we can start to filter out the branches that <em>should</em> be considered stale. In my opinion anything that’s older than 3 weeks is a good starting point. </p>
<pre class="brush: ps; gutter: false;">$date = [DateTime]::Today.AddDays( -21 )
$refs = $refs | Where-Object { $_.lastModified -lt $date }
</pre>
<p>Your kilometrage will obviously vary based on the volume of work in your repository, but on a recent project 10-20% of the branches were created recently.</p>
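<p>With the stale list in hand, one simple way to start the nagging conversation is to summarize the branches per creator:</p>

```powershell
# Summarize stale branches by the person who created them
$refs | Group-Object -Property { $_.creator.uniqueName } |
    Sort-Object -Property Count -Descending |
    ForEach-Object { "{0}: {1} stale branch(es)" -f $_.Name, $_.Count }
```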
<h2><a name="cross-referencing-prs">Finding Branches that have Completed Pull-Requests</a></h2>
<p>If you’re squashing your commits when you merge, you’ll find that the ahead / behind feature in the Azure DevOps UI is completely unreliable. This is because a squash merge re-writes history, so your commits in the branch will never appear in the target-branch at all. Microsoft <a href="https://docs.microsoft.com/en-us/azure/devops/repos/git/merging-with-squash?view=azure-devops#considerations-when-squash-merging" target="_blank">recommends deleting the source branch when using this strategy</a> as there is little value in keeping these branches around after the PR is completed. Teams may argue that they want to cherry-pick individual commits from the source-branch, but the practicality of that requires a pristine commit history.</p>
<p>Our best bet to find stale branches is to look at the Pull-Request history and consider all branches that are associated to <em>completed</em> Pull-Requests as candidates for deletion. This is <a href="https://www.youtube.com/watch?v=OEaSnmf1JwY" target="_blank">super easy, barely an inconvenience</a>.</p>
<pre class="brush: ps; gutter: false;">$prs = az repos pr list `
    --project $project `
    --repository $repository `
    --target-branch develop `
    --status completed `
    --query "[].sourceRefName" |
    ConvertFrom-Json

$refs |
    Where-Object { $prs.Contains( $_.name ) } |
    ForEach-Object {
        $result = az repos ref delete `
            --name $_.name `
            --object-id $_.objectId `
            --project $project `
            --repository $repository |
            ConvertFrom-Json
        Write-Host ("Success Message: {0}" -f $result.updateStatus)
    }
</pre>
<p>At first glance, this would remove about 50% of the remaining branches in our repository, leaving us with 10-20% recent branches and an additional 30-40% of branches without PRs. This is roughly a 40% reduction, and I’ll take that for now. It’s important to recognize this only includes the <em>completed</em> PRs, not the <em>active</em> or <em>abandoned.</em></p>
<h2><a name="wrap-up">Wrapping Up</a></h2>
<p>Using a combination of these techniques we could easily reduce the number of stale branches, and then provide the remaining list to the team to have them clean up the dregs. The majority of old branches are likely abandoned work, but there are scenarios where partially completed work is waiting on some external dependency. In that scenario, encourage the team to keep these important branches up-to-date to retain the value of the invested effort.</p>
<p>The best overall approach is to adopt a strategy that does not let this situation occur: give the individuals reviewing and completing PRs the permission to delete branches and encourage teams to squash and delete branches as they complete PRs. Good habits create good hygiene.</p>
<p>Happy coding.</p>bryanhttp://www.blogger.com/profile/01332614158223702009noreply@blogger.com0tag:blogger.com,1999:blog-8942599.post-85668340936614703252020-08-14T08:12:00.000-04:002020-08-15T08:51:35.737-04:00Securing Git Branches through Azure DevOps CLI<a title="Permission Granted" href="https://www.flickr.com/photos/thomashawk/46561099862/in/photolist-2dWrNPQ-9zgtps-HDgPVh-2ibrbqv-b4qCRa-6A95eZ-DGdhuP-5ufPSb-dX75zt-9MdiCX-2fXrzJD-mcp9Mn-7yDGEm-6gwTX8-24uAB2J-5vnpP3-6A96vc-7ELrcT-ct4tVy-QNCo7o-5YPsUe-2dwzt4C-6A96u8-dc4G54-93R4Hd-CwyVRu-afK1PM-ckQic7-p6oiMo-2gZZa9i-5zuvPC-6A96w8-4tvqU-7VLApz-wkhem-6A96t4-cWEZ4q-6AdeeW-QoS2N6-2jq3aiL-8gUowU-hwP9K8-v6xZvG-cHwbRL-2hTA2k6-NKEBwS-iVbwJV-6SvqLj-6A96rr-5G9s2n" data-flickr-embed="true"><img alt="Permission Granted" src="https://live.staticflickr.com/7906/46561099862_c543251da7_c.jpg" width="800" height="533" /></a><script async src="//embedr.flickr.com/assets/client-code.js" charset="utf-8"></script> <p>I've been looking for an way to automate branch security in order to enforce branch naming conventions and to control who can create release branches. 
Although the <a href="https://docs.microsoft.com/en-us/azure/devops/repos/git/require-branch-folders?view=azure-devops&tabs=browser" target="_blank">Azure DevOps documentation</a> illustrates how to do this using the <em>tfssecurity.exe</em> command, the documentation also suggests that the<em> <a href="https://docs.microsoft.com/en-us/azure/devops/server/command-line/tfssecurity-cmd?view=azure-devops-2020&viewFallbackFrom=azure-devops" target="_blank">tfssecurity.exe command is now deprecated</a></em>.</p> <p>This post will walk through how to apply branch security to your Azure DevOps repository using the Azure DevOps CLI.</p> <b>Table of Contents</b> <ul> <li><a href="#understanding-ado-security">Understanding Azure DevOps Security</a></li> <li><a href="#security-tokens">Security Tokens for Repositories and Branches</a></li> <ul> <li><a href="#convert-to-hex-format">Converting Branch Names to git hex format</a></li> <li><a href="#convert-from-hex-format">Obtaining Branch Name from Git Hex Format</a></li> </ul> <li><a href="#fetching-group-descriptors">Fetching Group Descriptors</a></li> <li><a href="#apply-branch-security">Apply Branch Security</a></li> <li><a href="#putting-it-together">Putting it all together</a></li> </ul> <h2><a name="understanding-ado-security">Understanding Azure DevOps Security</a></h2> <p>The Azure DevOps Security API is quite interesting as security can be applied to various areas of the platform, including permissions for the project, build pipeline, service-connection, git repositories, etc. Each of these areas support the ability to assign permissions for groups or individuals to a security token. In some cases these tokens are hierarchical, so changes made at the root are inherited on children nodes. 
The areas that define the permissions are defined as Security Namespaces, and each token has a Security Access Control List that contains Access Control Entries.</p> <p>We can obtain a complete list of security namespaces by querying <u>https://dev.azure.com/<organization>/_apis/securitynamespaces</u>, or by querying them using the az devops cli: </p> <pre class="brush: bash; gutter: false;">az devops security permission namespace list</pre>
<p>Each security namespace contains a list of actions, which are defined as bit flags. The following shows the "Git Repositories" security namespace:</p>
<pre class="brush: bash; gutter: false;">az devops security permission namespace list `
--query "[?contains(name,'Git Repositories')] | [0]"
</pre>
<pre class="brush: bash; gutter: false; collapse: true;">{
"namespaceId": "2e9eb7ed-3c0a-47d4-87c1-0ffdd275fd87",
"name": "Git Repositories",
"displayName": "Git Repositories",
"separatorValue": "/",
"elementLength": -1,
"writePermission": 8192,
"readPermission": 2,
"dataspaceCategory": "Git",
"actions": [
{
"bit": 1,
"name": "Administer",
"displayName": "Administer",
"namespaceId": "2e9eb7ed-3c0a-47d4-87c1-0ffdd275fd87"
},
{
"bit": 2,
"name": "GenericRead",
"displayName": "Read",
"namespaceId": "2e9eb7ed-3c0a-47d4-87c1-0ffdd275fd87"
},
{
"bit": 4,
"name": "GenericContribute",
"displayName": "Contribute",
"namespaceId": "2e9eb7ed-3c0a-47d4-87c1-0ffdd275fd87"
},
{
"bit": 8,
"name": "ForcePush",
"displayName": "Force push (rewrite history, delete branches and tags)",
"namespaceId": "2e9eb7ed-3c0a-47d4-87c1-0ffdd275fd87"
},
{
"bit": 16,
"name": "CreateBranch",
"displayName": "Create branch",
"namespaceId": "2e9eb7ed-3c0a-47d4-87c1-0ffdd275fd87"
},
{
"bit": 32,
"name": "CreateTag",
"displayName": "Create tag",
"namespaceId": "2e9eb7ed-3c0a-47d4-87c1-0ffdd275fd87"
},
{
"bit": 64,
"name": "ManageNote",
"displayName": "Manage notes",
"namespaceId": "2e9eb7ed-3c0a-47d4-87c1-0ffdd275fd87"
},
{
"bit": 128,
"name": "PolicyExempt",
"displayName": "Bypass policies when pushing",
"namespaceId": "2e9eb7ed-3c0a-47d4-87c1-0ffdd275fd87"
},
{
"bit": 256,
"name": "CreateRepository",
"displayName": "Create repository",
"namespaceId": "2e9eb7ed-3c0a-47d4-87c1-0ffdd275fd87"
},
{
"bit": 512,
"name": "DeleteRepository",
"displayName": "Delete repository",
"namespaceId": "2e9eb7ed-3c0a-47d4-87c1-0ffdd275fd87"
},
{
"bit": 1024,
"name": "RenameRepository",
"displayName": "Rename repository",
"namespaceId": "2e9eb7ed-3c0a-47d4-87c1-0ffdd275fd87"
},
{
"bit": 2048,
"name": "EditPolicies",
"displayName": "Edit policies",
"namespaceId": "2e9eb7ed-3c0a-47d4-87c1-0ffdd275fd87"
},
{
"bit": 4096,
"name": "RemoveOthersLocks",
"displayName": "Remove others' locks",
"namespaceId": "2e9eb7ed-3c0a-47d4-87c1-0ffdd275fd87"
},
{
"bit": 8192,
"name": "ManagePermissions",
"displayName": "Manage permissions",
"namespaceId": "2e9eb7ed-3c0a-47d4-87c1-0ffdd275fd87"
},
{
"bit": 16384,
"name": "PullRequestContribute",
"displayName": "Contribute to pull requests",
"namespaceId": "2e9eb7ed-3c0a-47d4-87c1-0ffdd275fd87"
},
{
"bit": 32768,
"name": "PullRequestBypassPolicy",
"displayName": "Bypass policies when completing pull requests",
"namespaceId": "2e9eb7ed-3c0a-47d4-87c1-0ffdd275fd87"
}
],
"structureValue": 1,
"extensionType": "Microsoft.TeamFoundation.Git.Server.Plugins.GitSecurityNamespaceExtension",
"isRemotable": true,
"useTokenTranslator": true,
"systemBitMask": 0
}
</pre>
<h2><a name="security-tokens">Security Tokens for Repositories and Branches</a></h2>
<p>The tokens within the Git Repositories Security Namespace follow the naming convention <em>repoV2/<projectId>/<repositoryId>/<branch></em><repositoryid><projectid><branch>. As this is a hierarchical security namespace, you can target very specific and granular permissions by adding parameters from left to right.</branch></projectid></repositoryid><projectid></projectid></p>
<p>Examples:</p>
<ul>
<li>All repositories within the organization: <em>repoV2/</em></li>
<li>All repositories within a project: <em>repoV2/&lt;projectId&gt;</em></li>
<li>All branches within a repository: <em>repoV2/&lt;projectId&gt;/&lt;repositoryId&gt;</em></li>
<li>Specific branch: <em>repoV2/&lt;projectId&gt;/&lt;repositoryId&gt;/&lt;branch&gt;</em></li>
</ul>
<p>As the tokens are hierarchical, a really cool feature is that we can define permissions for branch folders that do not exist yet.</p>
<p>While the project and repository elements are relatively self-explanatory, the git branch convention is case sensitive and expressed in a hex format. Both <a href="https://jessehouwing.net/azure-devops-git-setting-default-repository-permissions/" target="_blank">Jesse Houwing</a> and the <a href="https://devblogs.microsoft.com/devops/git-repo-tokens-for-the-security-service/" target="_blank">Azure DevOps blog have some good write-ups</a> on understanding this format. </p>
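<p>As a quick sanity check of the encoding, you can reproduce it from any shell (assuming <em>iconv</em> and <em>xxd</em> are available) — each character of the branch name becomes two bytes of UTF-16LE, rendered as lowercase hex:</p>

```shell
# UTF-16LE hex for a branch named "main"
printf 'main' | iconv -f UTF-8 -t UTF-16LE | xxd -p
# -> 6d00610069006e00
```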
<h3><a name="convert-to-hex-format">Converting Branch Names to git hex format</a></h3>
<p>The following PowerShell script can produce a security token for your project, repository or branch.</p>
<pre class="brush: ps; gutter: false;">function Get-RepoSecurityToken( [string]$projectId, [string]$repositoryId, [string]$branchName) {

    $builder = "repoV2/"

    if ( ![string]::IsNullOrEmpty($projectId) ) {

        $builder += $projectId
        $builder += "/"

        if ( ![string]::IsNullOrEmpty($repositoryId) ) {

            $builder += $repositoryId
            $builder += "/"

            if ( ![string]::IsNullOrEmpty( $branchName) ) {

                $builder += "refs/heads/"

                # remove extra values if provided
                $branchName = $branchName.TrimStart('/')
                if ( $branchName.StartsWith("refs/heads/") ) {
                    $branchName = $branchName.Substring("refs/heads/".Length)
                }
                if ( $branchName.EndsWith("/") ) {
                    $branchName = $branchName.TrimEnd('/')
                }

                $builder += (($branchName.Split('/')) | ForEach-Object { ConvertTo-HexFormat $_ }) -join '/'
            }
        }
    }

    return $builder
}

function ConvertTo-HexFormat([string]$branchName) {
    return ($branchName | Format-Hex -Encoding Unicode | Select-Object -Expand Bytes | ForEach-Object { '{0:x2}' -f $_ }) -join ''
}
</pre>
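<p>For example, with placeholder project and repository ids, the token for a <em>releases/v1</em> branch would look like this (note how each path segment of the branch name is hex-encoded separately):</p>

```powershell
# Illustrative only -- the GUIDs below are placeholders
Get-RepoSecurityToken "5e8cdc46-0000-0000-0000-000000000001" "a7573007-0000-0000-0000-000000000002" "releases/v1"
# repoV2/5e8cdc46-0000-0000-0000-000000000001/a7573007-0000-0000-0000-000000000002/refs/heads/720065006c0065006100730065007300/76003100
```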
<h3><a name="convert-from-hex-format">Obtaining Branch Name from Git Hex Format</a></h3>
<p>If you're working with query results and would like to see the actual name of the branches you've assigned, this function can reverse the hex format into a human readable string.</p>
<pre class="brush: ps; gutter: false;">function ConvertFrom-GitSecurityToken([string]$token) {

    $refHeads = "/refs/heads/"
    $normalized = $token

    if ($token.Contains($refHeads)) {

        $indexOf = $token.IndexOf($refHeads) + $refHeads.Length
        $firstHalf = $token.Substring(0, $indexOf)
        $secondHalf = $token.Substring($indexOf)

        $normalized = $firstHalf
        $normalized += (($secondHalf.Split('/')) | ForEach-Object { ConvertFrom-HexFormat $_ }) -join '/'
    }

    return $normalized
}

function ConvertFrom-HexFormat([string]$hexString) {

    $bytes = [byte[]]::new($hexString.Length / 2)

    for($i = 0; $i -lt $hexString.Length; $i += 2) {
        $bytes[$i/2] = [convert]::ToByte($hexString.Substring($i,2), 16)
    }

    return [Text.Encoding]::Unicode.GetString($bytes)
}
</pre>
<h2><a name="fetching-group-descriptors">Fetching Group Descriptors</a></h2>
<p>Although it's incredibly easy to fetch security permissions for individual users, obtaining the permissions for user groups requires a special descriptor. To make them easier to work with, I'll grab all the security groups in the organization and map them into a simple lookup table:</p>
<pre class="brush: ps; gutter: false;">function Get-SecurityDescriptors() {

    $lookup = @{}

    $descriptors = az devops security group list --scope organization --query "graphGroups[]" | ConvertFrom-Json
    $descriptors | ForEach-Object { $lookup.Add( $_.principalName, $_.descriptor ) }

    return $lookup
}
</pre>
<h2><a name="apply-branch-security">Apply Branch Security</a></h2>
<p>Given that we'll call the function several times, we'll wrap it in a method to make it easier to use.</p>
<pre class="brush: ps; gutter: false;">$namespaceId = "2e9eb7ed-3c0a-47d4-87c1-0ffdd275fd87"

function Set-BranchSecurity( $descriptor, $token, $allow = 0, $deny = 0 ) {
    $result = az devops security permission update `
        --id $namespaceId `
        --subject $descriptor `
        --token $token `
        --allow-bit $allow `
        --deny-bit $deny `
        --only-show-errors | ConvertFrom-Json
}
</pre>
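<p>After applying a change, you can double-check the result with the <em>az devops security permission show</em> command, which reports the effective allow and deny bits for a subject on a given token (this assumes the <em>$namespaceId</em>, descriptor and token variables from the earlier snippets):</p>

```powershell
# Inspect the access control entries for a group on a specific token
az devops security permission show `
    --id $namespaceId `
    --subject $descriptor `
    --token $token | ConvertFrom-Json
```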
<h2><a name="putting-it-together">Putting it All Together</a></h2>
<p>This crude example shows how to apply the guidance laid out in the <a href="https://docs.microsoft.com/en-us/azure/devops/repos/git/require-branch-folders?view=azure-devops&tabs=browser" target="_blank">Require branches to be created in folders</a> article, to a specific repository:</p>
<pre class="brush: ps; gutter: false;">$project = "projectName"
$projectId = az devops project list --query "value[?name=='$project'].id | [0]" | ConvertFrom-Json

# grab the first repo in the project
$repoId = az repos list --project $project --query [0].id | ConvertFrom-Json

$groups = Get-SecurityDescriptors
$administrators = $groups[ "[$project]\Project Administrators" ]
$contributors = $groups[ "[$project]\Contributors" ]

# a simple array of tokens we can refer to
$tokens = @(
    (Get-RepoSecurityToken $projectId $repoId),             # repo       - 0
    (Get-RepoSecurityToken $projectId $repoId "main"),      # main       - 1
    (Get-RepoSecurityToken $projectId $repoId "releases"),  # releases/* - 2
    (Get-RepoSecurityToken $projectId $repoId "feature"),   # feature/*  - 3
    (Get-RepoSecurityToken $projectId $repoId "users")      # users/*    - 4
)

$CreateBranch = 16

# prevent contributors from creating branches at the root of the repository
Set-BranchSecurity $contributors $tokens[0] -deny $CreateBranch

# limit users to only create feature and user branches
Set-BranchSecurity $contributors $tokens[3] -allow $CreateBranch
Set-BranchSecurity $contributors $tokens[4] -allow $CreateBranch

# restrict who can create a release
Set-BranchSecurity $administrators $tokens[2] -allow $CreateBranch

# allow admins to recreate master/main
Set-BranchSecurity $administrators $tokens[1] -allow $CreateBranch
</pre>
<p>To improve upon this, you could describe each of these expressions as a ruleset in a JSON format and apply them to all the repositories in a project.</p>
<p>Some considerations:</p>
<ul>
<li>Granting Build Service accounts permissions to create Tags</li>
<li>Empowering the Project Collection Administrators with the ability to override branch policy</li>
<li>Empowering certain Teams or Groups with the ability to delete feature or user branches</li>
</ul>
<p>Happy coding!</p>bryanhttp://www.blogger.com/profile/01332614158223702009noreply@blogger.com0tag:blogger.com,1999:blog-8942599.post-15138132401716637332020-08-07T08:16:00.003-04:002020-08-13T14:15:32.918-04:00Running Azure DevOps CLI from an Azure Pipeline<a data-flickr-embed="true" href="https://www.flickr.com/photos/141857125@N02/30163620418/in/photolist-MXsrt1-4DZov3-6DvRu-afyHku-DfSmD-bzabKk-bmfmjQ-bmfies-bmfmXY-bzadk2-bzacse-bzaani-bmfodJ-7K7uSZ-bveXAF-bnhEcd-brNT2h-nWxXnC-7KbqPq-bp7mFC-bmgdU4-7K7v4k-brNPD3-bmg9Xv-bmNmLv-bmNnun-bmNm3M-brNUJQ-bmgeCM-brNTw7-bEHFXk-brNRYb-7K7vfD-brNQLU-bnhD73-bmNkmD-brNMnm-bEHERB-bmfoSE-bmgbwv-brNQeG-bEHDaR-bmfnBh-bnhFnj-brNWC3-bEHNep-bEHLXZ-brNSyG-bEHFon-bEHDFr" title="pipelines"><img src="https://live.staticflickr.com/1820/30163620418_ff376f2b6a_c.jpg" width="800" height="550" alt="pipelines"></a><script async src="//embedr.flickr.com/assets/client-code.js" charset="utf-8"></script> <p>Having automation to perform common tasks is great. Having that automation run on a regular basis in the cloud is awesome.</p> <p>Today, I'd like to expand upon the <a href="http://www.bryancook.net/2020/07/managing-ado-licenses-from-azure-devops.html" target="_blank">sweet Azure CLI script to manage Azure DevOps User Licenses I wrote</a> and put it in an Azure Pipeline. The details of that automation script are <a href="http://www.bryancook.net/2020/07/managing-ado-licenses-from-azure-devops.html" target="_blank">outlined in my last post</a>, so take the time to check that out if you're interested, but to recap: my Azure CLI script activates and deactivates Azure DevOps user licenses if they’re not used. 
In this post, I'll outline how you can configure your pipeline to run your az devops automation on a recurring schedule.</p> <h2>Table of Contents</h2> <ul> <li><a href="#about-pipeline-security">About Pipeline Security</a></li> <li><a href="#setup-pat-token">Setup a PAT Token</a></li> <li><a href="#secure-access-to-tokens">Secure Access to Tokens</a></li> <li><a href="#create-pipeline">Create the Pipeline</a></li> <li><a href="#define-schedule">Define the Schedule</a></li> <li><a href="#authenticate-using-pat">Authenticate using the PAT Token</a></li> <li><a href="#run-powershell-script-from-ado">Run PowerShell Script from ADO</a></li> <li><a href="#combining-with-azure-cli">Combining with Azure CLI</a></li> </ul> <h2><a name="about-pipeline-security">About Pipeline Security</a></h2> <p>When our pipelines run, they operate by default using a project-specific user account: <em><Project Name> Build Service (<Organization Name>)</em>. For security purposes, this account is restricted to information within the Project.</p> <p>If your pipelines need to access details beyond the current Project they reside in, for example if you have a pipeline that needs access to repositories in other projects, you can configure the Pipeline to use the <em>Project Collection Build Service (<Organization Name>)</em>. This change is made by toggling off the "Limit job authorization scope to current project for non-release pipelines" setting (Project Settings -> Pipelines : Settings).</p> <p><a href="https://drive.google.com/uc?id=1kgTF6_kAwPtwF0Us16e2mrTXJ8qDgJsT"><img title="limit-job-scope" style="display: inline; background-image: none;" border="0" alt="limit-job-scope" src="https://drive.google.com/uc?id=17MNd8pzVnjHNmk_GBk1TFpPlKiHaRysh" width="560" height="249" /></a></p> <p>In both the Project and Collection level scenarios, the security context of the build account is made available to our pipelines through the <em>$(System.AccessToken)</em> variable. 
There's a small trick that's needed to make the access token available to our PowerShell scripts and I'll go over this later. But for the most part, if you're only accessing information about pipelines, code changes or details about the project, the supplied Access Token should be sufficient. In scenarios where you're trying to alter elements in the project, you may need to grant some additional permissions to the build service account.</p> <p>However, for the purposes of today's discussion, we want to modify user account licenses, which requires the elevated permissions of a <em>Project Collection Administrator</em>. I need to stress this next point: <strong>do not place the <em>Project Collection Build Service</em> in the <em>Project Collection Administrators</em> group</strong>. You're effectively granting any pipeline that uses this account full access to your organization. Do not do this. Here be dragons.</p> <p>Ok, so if the <em>$(System.AccessToken)</em> doesn't have the right level of access, we need an alternate access token that does.</p> <h2><a name="setup-pat-token">Setup a PAT Token</a></h2> <p>Setting up Personal Access Tokens is a fairly common activity, so <a href="https://docs.microsoft.com/en-us/azure/devops/organizations/accounts/use-personal-access-tokens-to-authenticate?view=azure-devops&tabs=preview-page" target="_blank">I'll refer you to this document on how the token is created</a>. 
As we are managing users and user licenses, we need a PAT Token created by a Project Collection Administrator with the <em>Member Entitlement Management</em> scope:</p> <p><a href="https://drive.google.com/uc?id=1lUVWCHnCCvoIbhCKXKWQ18BQEdx0gb-W"><img title="pat-token-member-entitlement-management" style="display: inline; background-image: none;" border="0" alt="pat-token-member-entitlement-management" src="https://drive.google.com/uc?id=11rqfiTgPPHLzv63A7CWs3avP6Eu8ty8g" width="420" height="98" /></a></p> <h2><a name="secure-access-to-tokens">Secure Access to Tokens</a></h2> <p>Now that we have the token that can manage user licenses, we need to put it somewhere safe. Azure DevOps offers a few good options here, each with increasing level of security and complexity:</p> <ul> <li>Place the token as a secret in the Pipeline </li> <li>Place the token in a Variable Group</li> <li>Place the token in a Variable Group that uses an Azure KeyVault </li> <li>Use a marketplace extension to <a href="https://marketplace.visualstudio.com/search?term=hashicorp&target=AzureDevOps&category=Azure%20Pipelines&sortBy=Relevance" target="_blank">fetch the secret from an alternate secrets provider</a> (eg HashiCorp KeyVault)</li> </ul> <p><a href="http://www.bryancook.net/2020/06/keeping-your-secrets-safe-in-azure.html">My personal go-to are Variable Groups</a> because they can be shared across multiple pipelines. 
Variable Groups also have their own Access Rights, so the owner of the variable group must authorize which pipelines and users are allowed to use its secrets.</p> <p>For our discussion, we'll create a variable group "AdminSecrets" with a variable "ACCESS_TOKEN".</p> <h2><a name="create-pipeline">Create the Pipeline</a></h2> <p>With our security concerns locked down, let's create a new pipeline (Pipelines -> Pipelines -> New Pipeline) with some basic scaffolding that defines both the machine type and access to our variable group that has my access token.</p> <pre class="brush: bash; gutter: false; toolbar: false;">name: Manage Azure Licenses
trigger: none

pool:
  vmImage: 'ubuntu-latest'

variables:
  - group: AdminSecrets
</pre>
<p>I want to call out that by using a Linux machine, we're using PowerShell Core. There are some subtle differences between PowerShell and PowerShell Core, so I would recommend that you always write your scripts locally against PowerShell Core.</p>
<h2><a name="define-schedule">Define the Schedule</a></h2>
<p>Next, we'll setup the schedule for the pipeline using a <a href="https://docs.microsoft.com/en-us/azure/devops/pipelines/process/scheduled-triggers?view=azure-devops&tabs=yaml#supported-cron-syntax" target="_blank">cron job schedule syntax</a>.</p>
<p>We'll configure our pipeline to run every night at midnight:</p>
<pre class="brush: bash; gutter: false; toolbar: false;">schedules:
  # run at midnight every day
  - cron: "0 0 * * *"
    displayName: Check user licenses (daily)
    branches:
      include:
        - master
    always: true
</pre>
<p>By default, scheduled triggers only run if there are changes, so we need to specify "always: true" to have this script run consistently.</p>
<h2><a name="authenticate-using-pat">Authenticate Azure DevOps CLI using PAT Token</a></h2>
<p>In order to invoke our script that uses az devops functions, we need to setup the Azure DevOps CLI to use our PAT Token. As a security restriction, Azure DevOps does not make secrets available to scripts so we need to explicitly pass in the value as an environment variable.</p>
<pre class="brush: bash; gutter: false; toolbar: false;">- script: |
    az extension add -n azure-devops
  displayName: Install Azure DevOps CLI

- script: |
    echo $(ADO_PAT_TOKEN) | az devops login
    az devops configure --defaults organization=$(System.CollectionUri)
  displayName: Login and set defaults
  env:
    ADO_PAT_TOKEN: $(ACCESS_TOKEN)
</pre>
<h2><a name="run-powershell-script-from-ado">Run PowerShell Script from ADO</a></h2>
<p>Now that our pipeline has the ADO CLI installed and we're authenticated using our secure PAT token, the last step is to invoke the PowerShell script. Here I'm using the <em>pwsh</em> task to ensure that PowerShell Core is used. The "<em>pwsh</em>" task is a shortcut syntax for the standard <em>powershell</em> task.</p>
<p>Our pipeline looks like this:</p>
<pre class="brush: bash; gutter: false; toolbar: false;">name: Manage Azure Licenses

trigger: none

schedules:
  # run at midnight every day
  - cron: "0 0 * * *"
    displayName: Check user licenses (daily)
    branches:
      include:
        - master
    always: true

pool:
  vmImage: 'ubuntu-latest'

variables:
  - group: AdminSecrets

steps:
  - script: |
      az extension add -n azure-devops
    displayName: Install Azure DevOps CLI

  - script: |
      echo $(ADO_PAT_TOKEN) | az devops login
      az devops configure --defaults organization=$(System.CollectionUri)
    displayName: Login and set defaults
    env:
      ADO_PAT_TOKEN: $(ACCESS_TOKEN)

  - pwsh: .\manage-user-licenses.ps1
    displayName: Manage User Licenses
</pre>
<h2><a name="combining-with-azure-cli">Combining with the Azure CLI</a></h2>
<p>Keen eyes may recognize that my manage-user-licenses.ps1 <a href="http://www.bryancook.net/2020/07/managing-ado-licenses-from-azure-devops.html" target="_blank">from my last post</a> also used the Azure CLI to access Active Directory, and because <em>az login</em> and <em>az devops login</em> are two separate authentication mechanisms, the approach described above won’t work in that scenario. To support this, we’ll also need:</p>
<ul>
<li>A service-connection from Azure DevOps to Azure (a Service Principal with access to our Azure Subscription)</li>
<li>The Directory.Read.All permission granted to the Service Principal</li>
<li>A script to authenticate us with the Azure CLI.</li>
</ul>
<p>The built-in <a href="https://docs.microsoft.com/en-us/azure/devops/pipelines/tasks/deploy/azure-cli?view=azure-devops" target="_blank">AZ CLI Task</a> is probably our best option for this, as it provides an easy way to work with our Service Connection. However, because this task clears the authentication before and after it runs, we have to change our approach slightly and execute our script logic within the script definition of this task. The following shows an example of how we can use both the Azure CLI and the Azure DevOps CLI in the same task:</p>
<pre class="brush: bash; gutter: false; toolbar: false;">- task: AzureCLI@2
  inputs:
    azureSubscription: 'my-azure-service-connection'
    scriptType: 'pscore'
    scriptLocation: 'inlineScript'
    inlineScript: |
      echo $(ACCESS_TOKEN) | az devops login
      az devops configure --defaults organization=$(SYSTEM.COLLECTIONURI) project=$(SYSTEM.TEAMPROJECT)
      az pipelines list
      az ad user list
</pre>
<p>If we need to run multiple scripts or break-up the pipeline into smaller tasks as I illustrated above, we’ll need a different approach where we have more control over the authenticated context. I can dig into this in another post.</p>
<h2>Wrap Up</h2>
<p>As I’ve outlined in this post, we can take simple PowerShell automation that leverages the Azure DevOps CLI and run it within an Azure Pipeline securely and on a schedule.</p>
<p>Happy coding.</p>bryanhttp://www.blogger.com/profile/01332614158223702009noreply@blogger.com0tag:blogger.com,1999:blog-8942599.post-69492153718728226022020-07-29T08:00:00.000-04:002020-08-26T18:00:23.862-04:00Managing ADO Licenses from Azure DevOps CLI<p>My <a href="http://www.bryancook.net/" target="_blank">last post introduced using JMESPath with the az devops cli</a>, which hopefully gave you some insight into using the <em>az cli</em> and the <em>az devops</em> extension. Today I want to highlight how you can easily pull the <em>az devops cli</em> into <em>PowerShell</em> to unlock some amazing scripting ability.</p> <p>A good example of this is how we can use <em>PowerShell + az devops cli</em> to manage Azure DevOps User Licenses.</p> <h2>Background</h2> <p>Azure DevOps is a licensed product, and while you can have unlimited free Stakeholder licenses, any developer who needs access to code repositories needs a Basic license, which costs about $7 CAD / month.</p> <p>It's important to note that this cost is not tied to usage, so if you've allocated licenses manually, you're paying for them whether they're used or not. Interestingly, this is not a fixed monthly cost, but prorated on a daily basis in the billing period. So if you can convert Basic licenses to Stakeholder licenses, you’ll only pay for the days in the billing period when the license was active.</p> <p>If you establish a process to revoke licenses when they're not being used, you can save your organization a few dollars that would otherwise be wasted. It might not be much, but when you consider that 10 unused Basic licenses cost about $840 for the year, the costs do add up – something you could argue should be added to your end-of-year bonus.</p> <h2>Integrating the Azure DevOps CLI into PowerShell</h2> <p>To kick things off and to show how incredibly easy this is, let's start with this snippet:</p> <pre class="brush: powershell; gutter: false; toolbar: false;">
$users = az devops user list | ConvertFrom-Json
Write-Host $users.totalCount
Write-Host $users.items[0].user.principalName
</pre>
<p>Boom. No special magic. We just call the <em>az devops cli</em> directly in our <em>PowerShell</em> script, converting the JSON result into an object by piping it through the <em>ConvertFrom-Json</em> commandlet. We can easily interrogate object properties and build up some conditional logic. Fun.</p>
<h2>Use JMESPath to Simplify Results</h2>
<p>While we could work with the results in this object directly, the result objects have a complex structure so I’m going to flatten the object down to make it easier to get at the properties we need. I only want the user’s email, license, and the dates when the user account was created and last accessed. </p>
<p>JMESPath makes this easy. If you missed the last post, go back and have a read to get familiar with this syntax.</p>
<pre class="brush: powershell; gutter: false; toolbar: false;">function Get-AzureDevOpsUsers() {
$query = "items[].{license:accessLevel.licenseDisplayName, email:user.principalName, dateCreated:dateCreated, lastAccessedDate:lastAccessedDate }"
$users = az devops user list --query "$query" | ConvertFrom-Json
return $users
}
</pre>
<p>Ok. That structure’s looking a lot easier to work with.</p>
<h2>Finding Licenses to Revoke</h2>
<p>Now all we need to do is find the licenses we want to convert from <em>Basic</em> to <em>Stakeholder</em>. Admittedly, this could create headaches for us if we're randomly taking away licenses that people might need in the future, but we should be safe if we target people who've never logged in or haven't logged in for 21 days.</p>
<p>I’m going to break this down into two separate functions. One to filter the list of users based on their current license, and another function to filter based on the last access date. </p>
<pre class="brush: ps; gutter: false; toolbar: false;">function Where-License()
{
param(
[Parameter(Mandatory=$true, ValueFromPipeline)]
$Users,
[Parameter(Mandatory=$true)]
[ValidateSet('Basic', 'Stakeholder')]
[string]$license
)
BEGIN{}
PROCESS
{
$users | Where-Object -FilterScript { $_.license -eq $license }
}
END{}
}
function Where-LicenseAccessed()
{
param(
[Parameter(Mandatory=$true, ValueFromPipeline)]
$Users,
[Parameter()]
[int]$WithinDays = 21,
[Parameter()]
[switch]$NotUsed = $false
)
BEGIN
{
$today = Get-Date
}
PROCESS
{
$Users | Where-Object -FilterScript {
$lastAccess = (@( $_.lastAccessedDate, $_.dateCreated) |
ForEach-Object { [datetime]$_ } |
Measure-Object -Maximum | Select-Object Maximum).Maximum
$timespan = New-TimeSpan -Start $lastAccess -End $today
if (($NotUsed -and $timespan.Days -gt $WithinDays) -or ($NotUsed -eq $false -and $timespan.Days -le $WithinDays)) {
Write-Host ("User {0} was last active {1} days ago." -f $_.email, $timespan.Days)
return $true
}
return $false
}
}
END {}
}
</pre>
<p>If you're new to PowerShell, the BEGIN, PROCESS, and END blocks may look peculiar, but they are essential for chaining results of arrays together. <a href="https://info.sapien.com/index.php/scripting/scripting-how-tos/take-values-from-the-pipeline-in-powershell" target="_blank">There’s a really good write-up on this here</a>. But to demonstrate, we can now chain these methods together, like so:</p>
<pre class="brush: ps; gutter: false; toolbar: false;">Get-AzureDevOpsUsers |
Where-License -license Basic |
Where-LicenseAccessed -NotUsed -WithinDays 21
</pre>
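<p>The date arithmetic inside <em>Where-LicenseAccessed</em> (take the later of dateCreated and lastAccessedDate, then compare the elapsed days against the threshold) can be illustrated outside PowerShell with a quick Python sketch. The dates below are made up for illustration:</p>

```python
from datetime import datetime

def is_unused(date_created, last_accessed, within_days=21, today=None):
    """Mirror of the Where-LicenseAccessed -NotUsed logic: a license is
    unused when the most recent of the two dates is more than
    `within_days` days ago."""
    today = today or datetime.now()
    last_active = max(date_created, last_accessed)
    return (today - last_active).days > within_days

today = datetime(2020, 7, 27)
# last activity 56 days ago -> unused
print(is_unused(datetime(2019, 1, 1), datetime(2020, 6, 1), today=today))   # True
# last activity 7 days ago -> still in use
print(is_unused(datetime(2019, 1, 1), datetime(2020, 7, 20), today=today))  # False
```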
<h2>Revoking Licenses</h2>
<p>And then we use the ever so important function to revoke licenses. This is simply a wrapper around the <em>az devops</em> command to improve readability.</p>
<pre class="brush: ps; gutter: false; toolbar: false;">function Set-AzureDevOpsLicense()
{
param(
[Parameter(Mandatory=$true)]
$User,
[Parameter(Mandatory=$true)]
[ValidateSet('express','stakeholder')]
[string]$license
)
Write-Host ("Setting User {0} license to {1}" -f $User.email, $license)
az devops user update --user $user.email --license-type $license | ConvertFrom-Json
}
</pre>
<h2>Putting it all Together</h2>
<p>So we've written all these nice little functions, let's put them together into a little PowerShell haiku:</p>
<pre class="brush: ps; gutter: false; toolbar: false;">$users = Get-AzureDevOpsUsers |
Where-License -license Basic |
Where-LicenseAccessed -NotUsed -WithinDays 21 |
ForEach-Object { Set-AzureDevOpsLicense -User $_ -license stakeholder }
Write-Host ("Changed {0} licenses." -f $users.length)
</pre>
<p>Nice. <em>Az</em> you can see that wasn't hard at all, and it probably saved my boss a few bucks. There are a few ways we can make this better...</p>
<h2>Dealing with lots of Users</h2>
<p>The <i>az devops user list</i> command can return up to 1000 users but only returns 100 by default. If you have a large organization, you'll need to make a few round trips to get all the data you need.</p>
<p>Let’s modify our <em>Get-AzureDevOpsUsers</em> function to retrieve all the users in the organization in batches.</p>
<pre class="brush: ps; gutter: false; toolbar: false;">function Get-AzureDevOpsUsers() {
param(
[Parameter()]
[int]$BatchSize = 100
)
$query = "items[].{license:accessLevel.licenseDisplayName, email:user.principalName, dateCreated:dateCreated, lastAccessedDate:lastAccessedDate }"
$users = @()
$totalCount = az devops user list --query "totalCount"
Write-Host "Fetching $totalCount users" -NoNewline
$intervals = [math]::Ceiling($totalCount / $BatchSize)
for($i = 0; $i -lt $intervals; $i++) {
Write-Host -NoNewline "."
$skip = $i * $BatchSize;
$results = az devops user list --query "$query" --top $BatchSize --skip $skip | ConvertFrom-Json
$users = $users + $results
}
return $users
}
</pre>
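<p>The paging arithmetic here is a straightforward ceiling division. For instance, with a hypothetical organization of 472 users and the default batch size of 100, we'd make 5 round trips; a quick check of the same calculation (in Python, for illustration):</p>

```python
import math

total_count = 472   # hypothetical organization size
batch_size = 100

# number of round trips needed to page through everyone
intervals = math.ceil(total_count / batch_size)
# the --skip value used on each round trip
skips = [i * batch_size for i in range(intervals)]

print(intervals)  # 5
print(skips)      # [0, 100, 200, 300, 400]
```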
<h2>Giving licenses back</h2>
<p>If the script can taketh licenses away, it should also giveth them back. To do this, we need the means to identify who <em>should</em> have a license. This can be accomplished using an Azure AD User Group populated with all users that should have licenses.</p>
<p>To get the list of these users, the Azure CLI comes to the rescue again. Also again, learning JMESPath really helps us because we can simplify the entire result into a basic string array:</p>
<pre class="brush: ps; gutter: false; toolbar: false;">az login --tenant <tenantid>
$licensedUsers = az ad group member list -g <groupname> --query "[].otherMails[0]" | ConvertFrom-Json
</pre>
<p>Note that I'm using the <em>otherMails</em> property to get the email address, your mileage may vary, but in my Azure AD, this setting matches Members and Guests with their email address in Azure DevOps.</p>
<p>With this magic array of users, my haiku can now reassign users their license if they've logged in recently without a license (sorry mate):</p>
<pre class="brush: ps; gutter: false; toolbar: false;">$licensedUsers = az ad group member list -g ADO_LicensedUsers --query "[].otherMails[0]" | ConvertFrom-Json
$users = Get-AzureDevOpsUsers
$reactivatedUsers = $users | Where-License -license Stakeholder |
Where-LicenseAccessed -WithinDays 3 |
Where-Object -FilterScript { $licensedUsers.Contains($_.email) } |
ForEach-Object { Set-AzureDevOpsLicense -User $_ -license express }
$deactivatedUsers = $users | Where-License -license Basic |
Where-LicenseAccessed -NotUsed -WithinDays 21 |
ForEach-Object { Set-AzureDevOpsLicense -User $_ -license stakeholder }
Write-Host ("Reviewed {0} users" -f $users.Length)
Write-Host ("Deactivated {0} licenses." -f $deactivatedUsers.length)
Write-Host ("Reactivated {0} licenses." -f $reactivatedUsers.length)
</pre>
<h2>Wrapping up</h2>
<p>In the last few posts, we've looked at the Azure DevOps CLI, understanding JMESPath and now integrating both into PowerShell to unleash some awesome. If you're interested in the source code for this post, <a href="https://github.com/bryanbcook/blog-code-samples/tree/master/manage-ado-licenses" target="_blank">you can find it here</a>.</p>
<p>In <a href="http://www.bryancook.net/2020/08/running-azure-devops-cli-from-azure.html" target="_blank">my next post</a>, we'll build upon this and show you how to integrate this script magic into an Azure Pipeline that runs on a schedule.</p>
<p>Happy coding.</p>bryanhttp://www.blogger.com/profile/01332614158223702009noreply@blogger.com0tag:blogger.com,1999:blog-8942599.post-51725465511853396722020-07-27T08:15:00.003-04:002020-07-27T15:19:52.158-04:00Azure DevOps CLI Examples<p>I've always been a fan of Azure DevOps's extensive REST API -- it's generally well documented, consistent and it seems like you can do pretty much anything you can do from within the web-interface. As much as I love the API, I hate having to bust it out. Nowadays, my new go-to tool is the Azure DevOps CLI.</p> <p>The Azure DevOps CLI is actually an extension of the Azure CLI. It contains a good number of common functions that you would normally use on a daily basis, plus features that I would normally rely on the REST API for. Its real power is unlocked when it's combined with your favourite scripting language. I plan to write a few posts on this topic, so stay tuned, but for today, we'll focus on getting up and running plus some cool things you can do with the tool.</p> <h2>Installation</h2> <p>Blurg. I hate installation blog posts. Let's get this part over with:</p> <pre class="brush: bash; gutter: false; toolbar: false;">choco install azure-cli -y
az extension add --name azure-devops
</pre>
<p>Whew. If you don't have Chocolatey installed, go here: https://chocolatey.org/install </p>
<h2>Get Ready</h2>
<p>Ok, so we're almost there. Just a few more boring bits. First we need to login:</p>
<pre class="brush: bash; gutter: false; toolbar: false;">az login --allow-no-subscription</pre>
<p>A quick note on the above statement. There are a number of different login options available but I've found az login with the --allow-no-subscription flag supports the majority of use cases. It'll launch a web-browser and require you to login as you normally would, and the --allow-no-subscription supports scenarios where you have access to the AD tenant to login but you don't necessarily have a subscription associated to your user account, which is probably pretty common for most users who only have access to Azure DevOps.</p>
<p>This next bit lets us store some commonly used parameters so we don't have to keep typing them out.</p>
<pre class="brush: bash; gutter: false; toolbar: false;">az devops configure --defaults organization=https://dev.azure.com/<organization>
</pre>
<p>In case you're curious, this config file is stored in %UserProfile%\.azure\azuredevops\config</p>
<h2>Our First Command</h2>
<p>Let's do something basic, like getting a list of all projects:</p>
<pre class="brush: bash; gutter: false; toolbar: false;">az devops project list
</pre>
<p>If we've configured everything correctly, you should see a boatload of JSON fly by. The CLI supports different options for output, but JSON works best when paired with our automation, plus there are some really cool things we can do with the result output by passing a JMESPath statement using the --query flag.</p>
<h2>Understanding JMESPath</h2>
<p>The JavaScript kids get all the cool tools. There are probably already a few dozen different ways of querying and manipulating JSON data, and JMESPath (pronounced James Path) is no different. The syntax is a bit confusing at first and it takes a little bit of tinkering to master it. So let's do some tinkering.</p>
<p>The best way to demonstrate this is to use the JSON output from listing our projects. Our JSON looks something like this:</p>
<pre class="brush: js; gutter: false; toolbar: false;">{
"value": [
{
"abbreviation": null,
"defaultTeamImageUrl": null,
"description": null,
"id": "<guid>",
"lastUpdateTime": "<date>",
"name": "Project Name",
"revision": 89,
"state": "wellFormed",
"url": "<url>",
"visibility": "private"
},
...
]
}
</pre>
<p>It's a single object with a property called "value" that contains an array. Let's do a few examples...</p>
<h4>Return the contents of the array</h4>
<p>Assuming that we want the details of the projects and not the outside wrapper, we can discard the "value" property and just get its contents, which is an array.</p>
<pre class="brush: bash; gutter: false; toolbar: false;">az devops project list --query "value[]"
</pre>
<pre class="brush: js; gutter: false; toolbar: false;">[
{
"abbreviation": null,
"defaultTeamImageUrl": null,
"description": null,
"id": "<guid>",
"lastUpdateTime": "<date>",
"name": "Project Name",
"revision": 89,
"state": "wellFormed",
"url": "<url>",
"visibility": "private"
},
...
]
</pre>
<h4>Return just the first element</h4>
<p>Because the "value" property is an array, we can get the first element.</p>
<pre class="brush: bash; gutter: false; toolbar: false;">az devops project list --query "value[0]"
</pre>
<pre class="brush: js; gutter: false; toolbar: false;">{
"abbreviation": null,
"defaultTeamImageUrl": null,
"description": null,
"id": "<guid>",
"lastUpdateTime": "<date>",
"name": "Project Name",
"revision": 89,
"state": "wellFormed",
"url": "<url>",
"visibility": "private"
}
</pre>
<p>You can also specify ranges:</p>
<ul>
<li>[:2] = the first two items (everything before index 2)</li>
<li>[1:3] = the 2nd and 3rd items</li>
<li>[1:] = everything from the 2nd item onward</li>
</ul>
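<p>These slices follow the same start:stop (stop-exclusive) semantics as Python's list slicing, so if you want to sanity-check a range expression, an ordinary Python list behaves identically:</p>

```python
items = ["a", "b", "c", "d", "e"]

print(items[:2])   # ['a', 'b']            -> [:2]  the first two items
print(items[1:3])  # ['b', 'c']            -> [1:3] the 2nd and 3rd items
print(items[1:])   # ['b', 'c', 'd', 'e']  -> [1:]  everything from the 2nd item
```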
<h4>Return an array of properties</h4>
<p>If we just wanted the id property of each element in the array, we can specify the property we want. The result assumes there are only 4 projects.</p>
<pre class="brush: bash; gutter: false; toolbar: false;">az devops project list --query "value[].id"
</pre>
<pre class="brush: js; gutter: false; toolbar: false;">[
"<guid>",
"<guid>",
"<guid>",
"<guid>"
]
</pre>
<h4>Return specific properties</h4>
<p>This is where JMESPath gets a tiny bit odd. In order to get just a handful of properties we need to do a "projection" which is basically like stating what structure you want the JSON result to look like. In this case, we're mapping the id and name property to projectId and projectName in the result output.</p>
<pre class="brush: bash; gutter: false; toolbar: false;">az devops project list --query "value[].{ projectId:id, projectName:name }"
</pre>
<pre class="brush: js; gutter: false; toolbar: false;">[
{
"projectId": "<guid>",
"projectName": "Project 1"
},
{
"projectId": "<guid>",
"projectName": "Project 2"
},
...
]
</pre>
<h4>Filter the results</h4>
<p>Here's where things get really interesting. We can put functions inside the JMESPath query to filter the results. This allows us to mix and match the capabilities of the API with the output filtering capabilities of JMESPath. This returns only the projects that are public.</p>
<pre class="brush: bash; gutter: false; toolbar: false;">az devops project list --query "value[?visibility=='public'].{ id:id, name:name }"
</pre>
<pre class="brush: js; gutter: false; toolbar: false;">[
{
"id": "<guid>",
"name": "Project 3"
}
]
</pre>
<p>We could have also written this as:</p>
<pre class="brush: js; gutter: false; toolbar: false;">--query "value[?contains(visibility,'public')].{id:id, name:name}"
</pre>
<h4>Piping the results</h4>
<p>In the above example, JMESPath assumes that the results will be an array. We can pipe the result to further refine it. In this case, we want just the first object in the resulting array.</p>
<pre class="brush: bash; gutter: false; toolbar: false;">az devops project list --query "value[?visibility=='public'].{ id:id, name:name} | [0]"
</pre>
<pre class="brush: js; gutter: false; toolbar: false;">{
"id": "<guid>",
"name": "Project 3"
}
</pre>
<p>Piping can improve the readability of the query similar to a functional language. For example, the above could be written as a <i>filter</i>, followed by a <i>projection</i>, followed by a <i>selection</i>.</p>
<pre class="brush: bash; gutter: false; toolbar: false;">--query "value[?contains(visibility,'public')] | [].{id:id, name:name} | [0]"</pre>
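<p>Those filter, projection, and selection stages map one-to-one onto ordinary list operations. Here's a sketch of the public-projects query from the filtering example above, written in plain Python against sample data shaped like the <em>az devops project list</em> response (no JMESPath library involved):</p>

```python
# sample data shaped like the "value" array of az devops project list
projects = [
    {"id": "guid-1", "name": "Project 1", "visibility": "private"},
    {"id": "guid-2", "name": "Project 2", "visibility": "private"},
    {"id": "guid-3", "name": "Project 3", "visibility": "public"},
]

# value[?visibility=='public']  -> filter
filtered = [p for p in projects if p["visibility"] == "public"]

# [].{id:id, name:name}         -> projection
projected = [{"id": p["id"], "name": p["name"]} for p in filtered]

# | [0]                         -> selection
result = projected[0] if projected else None

print(result)  # {'id': 'guid-3', 'name': 'Project 3'}
```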
<h4>Wildcard searches</h4>
<p>Piping the results becomes especially important if we want just the single value of a wildcard search. For this example, I need a different JSON structure, specifically a security descriptor:</p>
<pre class="brush: js; gutter: false; toolbar: false;">[
{
"acesDictionary": {
"Microsoft.IdentityModel.Claims.ClaimsIdentity;<dynamic-value>": {
"allow": 16,
"deny": 0,
"descriptor": "Microsoft.IdentityModel.Claims.ClaimsIdentity;<dynamic-value>",
"extendedInfo": {
"effectiveAllow": 32630,
"effectiveDeny": null,
"inheritedAllow": null,
"inheritedDeny": null
}
}
},
"includeExtendedInfo": true,
"inheritPermissions": true,
"token": "repoV2"
}
]
</pre>
<p>In this structure, I'm interested in getting the "allow", "deny" and "token" values, but the first element in the acesDictionary contains a dynamic value. We can use a wildcard "*" to substitute for properties we don't know at runtime.</p>
<p>Let's try to isolate that "allow". The path would seem like <i>[].acesDictionary.*.allow</i>, but because JMESPath has no idea whether this is a single element, it returns an array:</p>
<pre class="brush: js; gutter: false; toolbar: false;">[
[
16
]
]
</pre>
<p>If we pipe the result, <i>[].acesDictionary.*.allow | [0]</i>, we unwrap the outer array:</p>
<pre class="brush: js; gutter: false; toolbar: false;">[
16
]
</pre>
<p>Following suit and jumping ahead a bit so that I can skip to the answer, I can grab the "allow", "deny" and "token" values with the following query. At this point, I trust you can figure this out by referencing all the examples I've provided. The query looks like:</p>
<pre class="brush: bash; gutter: false; toolbar: false;">--query "[].{allow:acesDictionary.*.allow | [0], deny:acesDictionary.*.deny | [0], token:token } | [0]"
</pre>
<pre class="brush: js; gutter: false; toolbar: false;">{
"allow": 16,
"deny": 0,
"token": "repoV2"
}
</pre>
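<p>The same sketch extends to the full query. Again, this is plain Python standing in for the JMESPath evaluator, using the trimmed descriptor from above:</p>

```python
# Plain-Python sketch of:
#   [].{allow:acesDictionary.*.allow | [0], deny:acesDictionary.*.deny | [0], token:token} | [0]
data = [
    {
        "acesDictionary": {
            "Microsoft.IdentityModel.Claims.ClaimsIdentity;<dynamic-value>": {
                "allow": 16,
                "deny": 0,
            }
        },
        "token": "repoV2",
    }
]

def first_wildcard_value(element, key):
    # "acesDictionary.*.<key> | [0]": every value under the dynamic key, then the first
    return [ace[key] for ace in element["acesDictionary"].values()][0]

result = [
    {
        "allow": first_wildcard_value(item, "allow"),
        "deny": first_wildcard_value(item, "deny"),
        "token": item["token"],
    }
    for item in data
][0]  # the trailing "| [0]" unwraps the outer array

print(result)  # {'allow': 16, 'deny': 0, 'token': 'repoV2'}
```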
<p>Ok! That is waay too much JMESPath. Let's get back on topic.</p>
<h2>Using the Azure DevOps CLI</h2>
<p>The Azure DevOps CLI is organized into commands and subcommands with a few top-level entry points. At each level there are the obvious inclusions (list, add, delete, update, show), plus a few additional commands.</p>
<ul>
<li>az devops</li>
<ul>
<li>admin</li>
<ul>
<li>banner</li>
</ul>
<li>extension</li>
<li>project</li>
<li>security</li>
<ul>
<li>group</li>
<li>permission</li>
<ul>
<li>namespace</li>
</ul>
</ul>
<li>service-endpoint</li>
<ul>
<li>azurerm</li>
<li>github</li>
</ul>
<li>team</li>
<li>user</li>
<li>wiki</li>
<ul>
<li>page</li>
</ul>
</ul>
<li>az pipelines</li>
<ul>
<li>agent</li>
<li>build</li>
<ul>
<li>definition</li>
<li>tag</li>
</ul>
<li>folder</li>
<li>pool</li>
<li>release</li>
<ul>
<li>definition</li>
</ul>
<li>runs</li>
<ul>
<li>artifact</li>
<li>tag</li>
</ul>
<li>variable</li>
<li>variable-group</li>
</ul>
<li>az boards</li>
<ul>
<li>area</li>
<ul>
<li>project</li>
<li>team</li>
</ul>
<li>iteration</li>
<ul>
<li>project</li>
<li>team</li>
</ul>
<li>work-item</li>
<ul>
<li>relation</li>
</ul>
</ul>
<li>az repos </li>
<ul>
<li>import</li>
<li>policy</li>
<ul>
<li>approver-count</li>
<li>build</li>
<li>case-enforcement</li>
<li>comment-required</li>
<li>file-size</li>
<li>merge-strategy</li>
<li>required-reviewer</li>
<li>work-item-linking</li>
</ul>
<li>pr</li>
<ul>
<li>policy</li>
<li>reviewer</li>
<li>work-item</li>
</ul>
<li>ref</li>
</ul>
<li>az artifacts</li>
<ul>
<li>universal</li>
</ul>
</ul>
<p>I won’t go into all of these commands and subcommands, but I can showcase a few of the ones I’ve used most recently…</p>
<h4>List of Projects</h4>
<pre class="brush: bash; gutter: false; toolbar: false;">az devops project list --query "value[].{id:id, name:name}"
</pre>
<h4>List of Repositories</h4>
<pre class="brush: bash; gutter: false; toolbar: false;">az repos list --query "[].{id:id, defaultBranch:defaultBranch, name:name}"
</pre>
<h4>List of Branch Policies</h4>
<pre class="brush: bash; gutter: false; toolbar: false;">az repos policy list --project <name> --query "[].{name: type.displayName, required:isBlocking, enabled:isEnabled, repository:settings.scope[0].repositoryId, branch:settings.scope[0].refName}"
</pre>
<h4>Service Connections</h4>
<pre class="brush: bash; gutter: false; toolbar: false;">az devops service-endpoint list --project <name> --query "[].name"
</pre>
<h2>One More Thing</h2>
<p>So while the az devops cli is pretty awesome, it has a hidden gem. If you can't find a supporting command in the az devops cli, you can always call the REST API directly from the tool using the <i>az devops invoke</i> command. There's a bit of <a href="https://docs.microsoft.com/en-us/rest/api/azure/devops/?view=azure-devops-rest-6.0" target="_blank">hunting through documentation and available endpoints</a> to find what you're looking for, but you can get a full list of what's available using the following:</p>
<pre class="brush: bash; gutter: false; toolbar: false;">az devops invoke --query "[?contains(area,'build')]"
az devops invoke --query "[?area=='build' && resourceName=='timeline']"
</pre>
<pre class="brush: js; gutter: false; toolbar: false;">[
{
"area": "build",
"id": "8baac422-4c6e-4de5-8532-db96d92acffa",
"maxVersion": 6.0,
"minVersion": 2.0,
"releasedVersion": "5.1",
"resourceName": "Timeline",
"resourceVersion": 2,
"routeTemplate": "{project}/_apis/{area}/builds/{buildId}/{resource}/{timelineId}"
}
]
</pre>
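<p>The routeTemplate also tells you how <i>invoke</i> assembles the request URL: each {placeholder} is filled in from --route-parameters. The helper below is purely my own illustration of that substitution, not code from the CLI, and the real routing may treat empty segments differently:</p>

```python
import re

def expand_route(template, **params):
    # Hypothetical illustration: swap each {placeholder} for its route parameter,
    # then tidy up any segment left empty (like timelineId below).
    path = re.sub(r"\{(\w+)\}", lambda m: str(params.get(m.group(1), "")), template)
    return re.sub(r"/+", "/", path).rstrip("/")

template = "{project}/_apis/{area}/builds/{buildId}/{resource}/{timelineId}"
url = expand_route(template, project="myproject", area="build",
                   buildId=2058, resource="Timeline", timelineId="")
print(url)  # myproject/_apis/build/builds/2058/Timeline
```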
<p>We can invoke this REST API call by passing in the appropriate area, resource, route and query-string parameters. Assuming I know the buildId of a recent pipeline run, the following shows me the state and status of all the stages in that build:</p>
<pre class="brush: bash; gutter: false; toolbar: false;">az devops invoke \
  --area build \
  --resource Timeline \
  --route-parameters project=myproject buildId=2058 timelineid='' \
  --query "records[?contains(type,'Stage')].{name:name, state:state, result:result}"</pre>
<p><b>Tip:</b> the route and query parameters specified in the routeTemplate are case-sensitive.
</p>
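<p>If you'd rather capture the raw response and post-process it in a script, the --query filter above is easy to reproduce. The field names follow the Timeline records used in the query; the sample data itself is invented:</p>

```python
import json

# Invented sample of a Timeline response, trimmed to the fields we query on.
timeline = json.loads("""
{
  "records": [
    {"type": "Stage", "name": "ci_stage",  "state": "completed",  "result": "succeeded"},
    {"type": "Job",   "name": "build",     "state": "completed",  "result": "succeeded"},
    {"type": "Stage", "name": "dev_stage", "state": "inProgress", "result": null}
  ]
}
""")

# Equivalent of --query "records[?contains(type,'Stage')].{name:name, state:state, result:result}"
stages = [
    {"name": r["name"], "state": r["state"], "result": r["result"]}
    for r in timeline["records"]
    if "Stage" in r["type"]
]
print(stages)
```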
<h2>More to come</h2>
<p>Today's post outlined how to make sense of JMESPath and showcased some cool features of the Azure DevOps CLI. In my next few posts, I'll dig deeper into using the CLI in your favourite scripting tool.</p>
<p>Happy coding.</p>bryanhttp://www.blogger.com/profile/01332614158223702009noreply@blogger.com0tag:blogger.com,1999:blog-8942599.post-39102756218613544052020-07-15T08:00:00.001-04:002020-08-13T14:29:07.832-04:00Exclusive Lock comes to Azure Pipelines<a data-flickr-embed="true" href="https://www.flickr.com/photos/8724931@N07/3661769378/in/photolist-6zzwsN-f9YGBz-fadW8U-BxqN7P-6aSTU2-6aX4hb-HwDry-4A1qiP-2jp2s1N-HwDrs-f9YGMk-53ctFL-52uHxe-fadWcy-igS1Pz-74q11s-9tAn4a-f9YGRV-fadW67-5nYM3V-67sb7C-9tAndz-7mKGM2-5o45um-cCHza5-3BeQnH-dn2JDR-538dxz-ddoJ2a-5o42hN-cCHzFu-gvX7B6-82fzXL-21XyWNV-gvWSfU-7qSmSK-74AG5L-5x9Vn7-9tDjHu-gvXa4R-6beHP8-5j9yQv-gvW3VS-6W8QtU-gvX8Hz-gvWS77-gvWzV8-gvX9Zc-cpg17u-gvX9aX" title="semaphore red left"><img src="https://live.staticflickr.com/2559/3661769378_acfdcbb8e7_c.jpg" width="800" height="536" alt="semaphore red left"></a><script async src="//embedr.flickr.com/assets/client-code.js" charset="utf-8"></script> <p>As part of <a href="https://docs.microsoft.com/en-us/azure/devops/release-notes/2020/sprint-171-update" target="_blank">Sprint 171</a>, the Azure DevOps team introduced a much needed feature for Multi-Stage YAML Pipelines, the Exclusive Lock "check" that can be applied to your environments. This feature silently slipped into existence without any mention of it in the release notes, but I was personally thrilled to see this. (At the time this post was written, <a href="https://docs.microsoft.com/en-us/azure/devops/release-notes/2020/sprint-172-update" target="_blank">Sprint 172</a> announced this feature was available)</p> <p>Although Multi-Stage YAML Pipelines <a href="https://devblogs.microsoft.com/devops/announcing-general-availability-of-azure-pipelines-yaml-cd/" target="_blank">have been available for a while</a>, there are still some subtle differences between their functionality and what's available through Classic Release Pipelines. 
Fortunately over the last few sprints we've seen a few incremental features to help close that feature parity gap, with more to come. One of the missing features is something known as "Deployment Queuing Settings" -- a Classic Release pipeline feature that dictates how pipelines are queued and executed. The Exclusive Lock check solves a few pain points but falls short on some of the more advanced options.</p> <p>In this post, I'll walk through what Exclusive Locks are, how to use them and some other thoughts for consideration.</p> <h2>Deployments and Environments</h2> <p>Let's start with a multi-stage pipeline with a few stages, where we perform CI activities and each subsequent stage deploys into an environment. Although we could write our YAML to build and deploy using standard tasks, we're going to use the special "deployment" job that tracks builds against Environments.</p> <pre class="brush: plain; gutter: false; toolbar: false;">trigger:
- master

stages:
- stage: ci_stage
  ...steps to compile and produce artifacts

- stage: dev_stage
  condition: and(succeeded(), eq(variables['Build.SourceBranch'], 'refs/heads/master'))
  dependsOn: ci_stage
  jobs:
  - deployment: dev_deploy
    environment: dev
    strategy:
      runOnce:
        deploy:
          ... steps to deploy

- stage: test_stage
  dependsOn: dev_stage
  ...
</pre>
<p>If we were to run this hypothetical pipeline, the code would compile in the CI stage and then immediately start deploying into each environment in sequence. Although we definitely want to have our builds deploy into the environments in sequence, we might not want them to advance into the environments automatically. That's where Environment Checks come in.</p>
<h2>Environment Checks</h2>
<p>As part of multi-stage yaml deployments, Azure DevOps has introduced the concept of Environments which are controlled outside of your pipeline. You can set special "Checks" on the environment that must be fulfilled before the deployment can occur. On a technical note, environment checks bubble up from the deployment task to the stage, so the checks must be satisfied before the stage is allowed to start.</p>
<p>For our scenario, we're going to assume that we don't want to automatically go to QA, so we'll add an Approval Check that allows our testing team to approve the build before deploying into their environment. We'll add approval checks for the other stages, too. Yay workflow!</p>
<p><a href="https://drive.google.com/uc?id=12Kq1gqsUGT1xADl1x68SCkJ9aCJkZySw"><img title="approval-checks" style="display: inline; background-image: none;" border="0" alt="approval-checks" src="https://drive.google.com/uc?id=1MRzp4ANhsGZZILimzU5b5KPM4CU5eBJT" width="454" height="585" /></a></p>
<p>At this point, everything is great: builds deploy to dev automatically and then pause at the test_stage until the testing team approves. Later, we add more developers to our project and the frequency of the builds starts to pick up. Almost immediately, the single agent build pool starts to fill up with builds and the development team start to complain that they're waiting a really long time for their build validation to complete. </p>
<p>Obviously, we add more build agents. <strong>Chaos ensues.</strong></p>
<h2>What just happen'd?</h2>
<p>When we introduced additional build agents, we were expecting multiple CI builds to run simultaneously but we probably weren't expecting multiple simultaneous deployments! This is why the Exclusive Lock is so important.</p>
<p><img src="https://docs.microsoft.com/en-us/azure/devops/release-notes/2020/media/172-pipelines-0-0.png" width="450" height="459" /></p>
<p>By introducing an Exclusive Lock, all deployments are forced to happen in sequence. Awesome. Order is restored.</p>
<p>There unfortunately isn't a lot of documentation available for the Exclusive Lock, but according to the description:</p>
<blockquote>
<p>“Adding an exclusive lock will only allow a single run to utilize this resource at a time. If multiple runs are waiting on the lock, only the latest will run. All others will be canceled."</p>
</blockquote>
<p>Most of this is obvious, but what does 'All others will be canceled' mean?</p>
<h2>Canceling Queued Builds</h2>
<p>My initial impression of "all other [builds] will be canceled" got me excited -- I thought this was similar to the “deploy latest and cancel the others” setting of Deployment Queuing Settings:</p>
<p><a href="https://drive.google.com/uc?id=13_rdYb5-RYRdvN9PTSs4vO-3lAV0ei5m"><img title="deployment-queue-settings" style="display: inline; background-image: none;" border="0" alt="deployment-queue-settings" src="https://drive.google.com/uc?id=1JUkPG3GqZXC364lQN0i8UcNK5PofbNUr" width="244" height="186" /></a></p>
<p>Unfortunately, this is not the intention of the Exclusive Lock. It focuses only on the sequencing of builds, not on the pending queue. To understand what <em>“all others will be canceled”</em> means, let's assume we have 3 available build agents and use the <em>az devops CLI</em> to trigger three simultaneous builds.</p>
<pre class="brush: plain; gutter: false; toolbar: false;">az pipelines run --project myproject --name mypipeline
az pipelines run --project myproject --name mypipeline
az pipelines run --project myproject --name mypipeline
</pre>
<p>In this scenario, all three CI builds run simultaneously, but the fun starts when all three pipeline runs hit the dev_stage. As expected, the first pipeline takes the exclusive lock on the development environment while the deployment runs, and the remaining two builds queue up waiting for the exclusive lock to be released. When the first build completes, the second build is automatically marked as canceled and the last build begins its deployment.</p>
<p><a href="https://drive.google.com/uc?id=1dq1DvhECxQQUGh1ickm9vPqRg-ej_5YB"><img title="exclusive-lock-queuing" style="display: inline; background-image: none;" border="0" alt="exclusive-lock-queuing" src="https://drive.google.com/uc?id=1n20uGnQfwY9E2Oog1mGzmf1ZfatXzyFG" width="171" height="244" /></a></p>
<p>This is awesome. However, I was really hoping that I could combine the Exclusive Lock with the Approval Gate to recreate the same functionality of the Deployment Queuing option: approving the third build would cancel the previous builds. Unfortunately, this isn’t the case. I’m currently evaluating whether I can write some deployment automation in my pipeline to cancel other pending builds.</p>
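<p>For the curious, my current thinking for that automation looks roughly like the sketch below: query the Builds API for runs of the same pipeline that haven't started, then PATCH each one's status to "cancelling". Treat this as an unverified outline; the filtering is runnable, but the REST call is only sketched against the Builds update operation and the endpoint details are assumptions:</p>

```python
import json
import urllib.request

def builds_to_cancel(builds, current_build_id):
    # Queued runs report status "notStarted"; never cancel the run we're in.
    return [b["id"] for b in builds
            if b["status"] == "notStarted" and b["id"] != current_build_id]

# Invented sample of what the Builds list might return:
sample = [
    {"id": 2058, "status": "inProgress"},
    {"id": 2059, "status": "notStarted"},
    {"id": 2060, "status": "notStarted"},
]
print(builds_to_cancel(sample, current_build_id=2060))  # [2059]

def cancel_build(org_url, project, build_id, token):
    # Sketch only: PATCH the build with {"status": "cancelling"}.
    req = urllib.request.Request(
        f"{org_url}/{project}/_apis/build/builds/{build_id}?api-version=6.0",
        data=json.dumps({"status": "cancelling"}).encode(),
        method="PATCH",
        headers={"Content-Type": "application/json",
                 "Authorization": f"Bearer {token}"},
    )
    return urllib.request.urlopen(req)  # network call, not exercised here
```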
<h2>Wrapping Up</h2>
<p>In my opinion, Exclusive Locks are a hidden gem of Sprint 171 as they’re essential if you’re automatically deploying into an environment without an Approval Gate. This feature recreates the “deploy all in sequence” feature of Classic Release Pipelines. The jury is still out on canceling builds from automation. I’ll keep you posted.</p>
<p>Happy coding!</p>bryanhttp://www.blogger.com/profile/01332614158223702009noreply@blogger.com0tag:blogger.com,1999:blog-8942599.post-15668753164927705312020-07-14T08:00:00.000-04:002020-07-14T10:53:54.850-04:00Using Templates to improve Pull Requests and Work-Items (Part 2)<p>In my <a href="http://www.bryancook.net/" target="_blank">previous post</a>, I outlined how to set up templates for pull-requests. Today we’ll focus on how to configure work-items with some project-specific templates. We’ll also look at how you can create these customizations for all projects within the enterprise and the motivations for doing so.</p> <p>While having a good pull-request template can improve clarity and reduce the effort needed to approve pull-requests, having well-defined work-items is equally important, as they can drive and shape the work that needs to happen. We can define templates for our work-items to encourage work-item authors to provide the right level of detail (such as steps to reproduce a defect), or we can use templates to reduce effort for commonly created work-items (such as fields that are commonly set when creating a technical-debt work-item).</p> <h2>Creating Work-Item Templates</h2> <p>Although you can define pull-request templates as files in your git repository, Azure DevOps doesn’t currently support the ability to customize work-items as managed source files. This is largely due to the complexity of the work-item structure and the level of customization available, so our only option to date is to manipulate the templates through the Azure Boards user-interface.
Fortunately, it’s relatively simple and there are a few different ways you can set up and customize your templates – you can either specify customizations through the Teams configuration for your Project, or you can extract a template from an existing work item.</p> <p>As extracting from an existing work-item is easier, we’ll look at this first.</p> <h3>Creating Templates from Existing Work-Items</h3> <p>To create a template from an existing work item, simply create a new work-item that represents the content that you’d like to see in your template. The great news is that our template can capture many different elements, ranging from the description and other commonly used fields to more specialized fields like Sprint or Labels.</p> <p>It’s important to note that templates are team-specific, so if you’re running a project with multiple scrum teams, each team can self-organize and create templates that are unique to their needs.</p> <p>Here’s an example of a user story with the description field pre-defined:</p> <p><a href="https://drive.google.com/uc?id=1vJN8Qw7xqlN2uFbaQgffVwHlqCznVCmI"><img title="work-item-example" style="display: inline; background-image: none;" border="0" alt="work-item-example" src="https://drive.google.com/uc?id=1AiFKIbKk-P309TbqL3qLIlhvwa3Z3lyn" width="454" height="337" /></a></p> <p>Once we like the content of the story, we can convert it into a template using the ellipsis menu (…) Templates –> Capture:</p> <p><a href="https://drive.google.com/uc?id=1gO3XOjee5oA_ZYtnzw2GdnraT2MWEKu1"><img title="work-item-capture-template" style="display: inline; background-image: none;" border="0" alt="work-item-capture-template" src="https://drive.google.com/uc?id=1tnlZKqQ7kfFPAzH-5ngR1ge8VdzzL07z" width="454" height="569" /></a></p> <p>The capture dialog allows us to specify which fields we want to include in our template.
This typically populates with the fields that have been modified, but you can remove or add any additional fields you want:</p> <p><a href="https://drive.google.com/uc?id=1vdkssv0sXX5Fz7sI0R_31nDtHj481z9a"><img title="capture-template-dialog" style="display: inline; background-image: none;" border="0" alt="capture-template-dialog" src="https://drive.google.com/uc?id=14_PhofurF5bAdR7HXAyUyJ0l5fMK53nb" width="454" height="486" /></a></p> <p>As some fields are stored in the template as HTML, using this technique of creating a template from an existing work-item is especially handy.</p> <h3>Customizing Templates</h3> <p>Once you’ve defined the template, you find them in<em> Settings –> Team Configuration</em>. There’s a sub-navigation item for <em>Templates.</em></p> <p><a href="https://drive.google.com/uc?id=1dFLoVLR0uFHvFKaTYtyBLx70Vbbxqwlv"><img title="edit-template" style="display: inline; background-image: none;" border="0" alt="edit-template" src="https://drive.google.com/uc?id=1-oUe8YfY7iEsw7zNXCguh0btnLa3FGGI" width="644" height="394" /></a></p> <h2>Applying Templates</h2> <p>Once you have the template(s) created, there are a few ways you can apply them to your work-items: you can apply the template to the work-item while you’re editing it, or you can apply it to the work-item from the backlog. Both activities are achieved using the ellipsis menu: <em>Templates –> <template-name></em>. </p> <p>The latter option of applying the template from the Backlog is extremely useful because you can apply the template to multiple items at the same time. </p> <p><a href="https://drive.google.com/uc?id=1C_UHgC_hp4BcgbpbWfV_PArCCZBwWXfg"><img title="assign-template-from-backlog" style="display: inline; background-image: none;" border="0" alt="assign-template-from-backlog" src="https://drive.google.com/uc?id=1fXZxXEpp7jvsPD8EYRstj-if_31XtCNu" width="644" height="313" /></a></p> <p>With some creative thinking, templates can be used like macros for commonly performed activities. 
For example, I created a “Technical Debt” template that adds a <em>TechDebt </em>tag, lowered priority and changes the <em>Value Area</em> to <em>Architectural.</em></p> <h2>Creating Work-Items from Templates</h2> <p>If you want to apply the template to work-items as you create them, you’ll need to navigate to a special URL that is provided with each template (Settings –> Boards: Team Configuration –> Templates). </p> <p><a href="https://drive.google.com/uc?id=131UbVenM4spLtG7qnZR55W3kLk22wZtH"><img title="get-link-for-template" style="display: inline; background-image: none;" border="0" alt="get-link-for-template" src="https://drive.google.com/uc?id=10mGVcCT_97aGduTsmJQhJ9q6T4dtCEuq" width="644" height="303" /></a></p> <p>The <em>Copy Link</em> option copies the unique URL to the template to the clipboard, which you can circulate to your team. Personally, I like to create a Markdown widget on my dashboard that allows team members to navigate to this URL directly.</p> <p><a href="https://drive.google.com/uc?id=1S2u8rTMKHHNSJQlvoQjpyMx7GYzLuzg9"><img title="create-work-item-from-dashboard" style="display: inline; background-image: none;" border="0" alt="create-work-item-from-dashboard" src="https://drive.google.com/uc?id=1W_jvTfVUqmmj49aKXRNiUKGrsgj6sKYe" width="454" height="413" /></a></p> <h2>Going Further – Set defaults for Process Template</h2> <p>Unfortunately, there’s no mechanism to specify which work-item template should be used as the default for a team. You can however provide these customizations at the <em>Process</em> level, which applies these settings for all teams using that process template. 
Generally speaking, you should only make these customizations for enterprise-wide needs.</p> <p>Note that you can’t directly edit the default process templates; you will need to create a new process template based on the default: Organization Settings –> Boards –> Process:</p> <p><a href="https://drive.google.com/uc?id=1RFfjIXjGJrVT_xcFHJN35AgqeC6QVfJT"><img title="process-template" style="display: inline; background-image: none;" border="0" alt="process-template" src="https://drive.google.com/uc?id=1tV1n7TJ_DoUOimfCmxXZRgaLlv6M_S2O" width="454" height="306" /></a></p> <p>Within the process, you can bring up any of the work-items into an editor that lets you re-arrange the layout and contents of the work-item. To edit the Description field to have a default value, we select the <em>Edit </em>option in the ellipsis menu:</p> <p><a href="https://drive.google.com/uc?id=1x3I23OndBusg-w3CJzYDieXqbaCBI6fi"><img title="edit-process-template" style="display: inline; background-image: none;" border="0" alt="edit-process-template" src="https://drive.google.com/uc?id=1KWemEN2PrLHWak2eOxBFLaioD0LyXeIj" width="454" height="253" /></a></p> <p>Remembering that certain fields are HTML, we can set the default for our user story by modifying the default options:</p> <p><a href="https://drive.google.com/uc?id=1JkpUutnsP0OOWRixUQMgEyGy1NnIo8jq"><img title="edit-process-template-field" style="display: inline; background-image: none;" border="0" alt="edit-process-template-field" src="https://drive.google.com/uc?id=1kBuYcxKqRwfpzrqCv2mThgubuc1HHxw8" width="454" height="261" /></a></p> <h2>Wrapping up</h2> <p>Hopefully the last two posts on providing templates for pull requests and work-items have given you some ideas on how to quickly provide some consistency to your projects.</p> <p>Happy
coding!</p>bryanhttp://www.blogger.com/profile/01332614158223702009noreply@blogger.com0tag:blogger.com,1999:blog-8942599.post-11634055741789839072020-06-29T08:00:00.000-04:002020-07-14T10:54:44.497-04:00Using Templates to improve Pull Requests and Work-Items (Part 1)<p>I’m always looking for ways to improve the flow of work. I have a few posts I want to share on using templates for pull requests and work-items. Today, I want to focus on some templates that you can add to your Azure DevOps pull requests to provide some additional context for the work.</p> <h2>Templates for Pull Requests</h2> <p>Pull Requests are a crucial component of our daily work. They help drive our continuous delivery workflows and because they’re accessible from our git history long after the pull-request has been completed, they can serve as an excellent reference point for the work. If you review a lot of pull-requests in your day, a well-written pull-request can make the difference between a good and bad day.</p> <p>Not many folks realize that Azure DevOps supports pre-populating your pull request with a default template. It can even provide customized messages for specific branches. And because Pull Requests for Azure Repos support markdown, you can provide a template that encourages your team to provide the right amount of detail <em>(and look good, too)</em>.</p> <h3>Default Pull Request Template</h3> <p>To create a single template for all your pull requests, create a markdown file named <em>pull_request_template.md</em> and place it in the root of your repository or in a folder named either <em>.azuredevops, .vsts, </em>or <em>docs. </em>For example:</p> <ul> <li>.azuredevops/pull_request_template.md</li> <li>.vsts/pull_request_template.md</li> <li>docs/pull_request_template.md</li> <li><root>/pull_request_template.md</li> </ul> <p>A sample pull request might look like:</p> <pre class="brush: plain; gutter: false; toolbar: false;">----
Delete this section before submitting!
Please ensure you have the following:
- PR Title is meaningful
- PR Title includes work-item number
- Required reviewers is populated with people who must review these changes
- Optional reviewers is populated with individuals who should be made aware of these changes
----
# Summary
_Please provide a high-level summary of the changes for the changes and notes for the reviewers_
- [ ] Code compiles without issues or warnings
- [ ] Code passes all static code-analysis (SonarQube, Fortify SAST)
- [ ] Unit tests provided for these changes
## Related Work
These changes are related to the following PRs and work-items:
_Note: use !<number> to link to PRs, #<number> to link to work items_
## Other Notes
_if applicable, please note any other fixes or improvements in this PR_
</pre>
<p>As you can see, I've provided a section at the top with some guidance on things to do before creating the pull request, such as making sure it has a meaningful name, while the following section offers some prompts to encourage the pull-request author to provide more detail. Your kilometrage will vary, but you may want to work with your team to make a template that fits your needs.</p>
<p>Pull request templates can be written in markdown, so it’s possible to include images and tables. My favourites are the checkboxes (<em>- [ ]</em>), which can be marked as completed without having to edit the content. </p>
<h3>Branch Specific Templates</h3>
<p>You may find the need to create templates that are specific to the <strong>target</strong> branch. To do this, create a special folder named “pull_request_template/branches” within one of the same folders mentioned above and create a markdown file with the name of the target branch. For example:</p>
<ul>
<li>.azuredevops/pull_request_template/branches/develop.md</li>
<li>.azuredevops/pull_request_template/branches/release.md</li>
<li>.azuredevops/pull_request_template/branches/master.md</li>
</ul>
<p>When creating your pull-request, Azure DevOps will attempt to find the appropriate template by matching on these templates first. If a match cannot be found, the <em>pull_request_template.md</em> is used as a fallback option.</p>
<p>Ideally, I’d prefer different templates based on the <strong>source</strong> branch, as we could provide pull-request guidance for <em>bug/*, feature/*, </em>and <em>hotfix/*</em> branches. However, if we focus on develop, release and master, we can support the following scenarios:</p>
<ul>
<li><strong>develop.md: </strong>provide an overview of improvements of a feature, evidence for unit tests and documentation, links to work-items and test-cases<em>, </em>etc</li>
<li><strong>release.md:</strong> provide high-level overview of the items in this release, related dependencies and testing considerations</li>
<li><strong>master.md:</strong> (optional) provide a summary of the release and its related dependencies</li>
</ul>
<ul></ul>
<h3>Additional Templates</h3>
<p>In addition to the branch-specific or default templates, you can create as many templates as you need. You could create specific templates for critical bug fixes, feature proposals, etc. In this scenario, I’d use that initial delete-me section to educate the user on which template they should use.</p>
<p>You’re obviously not limited to a single template either. If you have multiple templates available, you can mix and match from any of the available templates to fit your needs. Clicking “add template” simply appends the other template to the body of the pull-request.</p>
<p><a href="https://drive.google.com/uc?id=1HRA-HPz2-jajTjRaDblpLSkhzUsOr8Y2"><img title="create-pull-request" style="display: inline; background-image: none;" border="0" alt="create-pull-request" src="https://drive.google.com/uc?id=1-GfYjwfrQpMwGTg5YNm4LsFSRFGv6yFl" width="644" height="452" /></a></p>
<h2>Other observations</h2>
<p>Here’s a few other observations that you might want to consider:</p>
<ul>
<li>If the pull-request contains only a single commit, the name of the pull-request will default to the commit message. The commit message is also appended to the bottom of the pull-request automatically.</li>
<li>If your pull-request contains multiple commits, the name of the pull-request is left empty. The commit messages do not prepopulate into the pull-request, but the “Add commit messages” button appears. The commit messages are added “as-is” to the bottom of the pull-request, regardless of where the keyboard cursor is.</li>
</ul>
<h3>Conclusion</h3>
<p>Hopefully this sheds some light on a feature you might not have known existed. In my next post, we’ll look at how we can provide templates for work-items.</p>
<p>Happy Coding!</p>
<ul><strong></strong></ul>bryanhttp://www.blogger.com/profile/01332614158223702009noreply@blogger.com0tag:blogger.com,1999:blog-8942599.post-20241354939577307732020-06-08T08:00:00.002-04:002020-06-08T08:00:00.991-04:00Keeping your Secrets Safe in Azure Pipelines<p>These days, it’s critical that everyone in the delivery team has a security mindset and is vigilant about keeping secrets away from prying eyes. Fortunately, Azure Pipelines have some great features to ensure that your application secrets are not exposed during pipeline execution, but it’s important to adopt some best practices early on to keep things moving smoothly.</p> <h2>Defining Variables</h2> <p>Before we get too far, let’s take a moment to step back and talk about the motivations for variables in Azure Pipelines. We want to use variables for things that might change in the future, but more importantly we want to use variables to prevent secrets like passwords and API Keys from being entered into source control.</p> <p>Variables can be defined in several different places. They can be placed as meta-data for the pipeline, in variable groups, or dynamically in scripts.</p> <h3>Define Variables in Pipelines</h3> <p>Variables can be scoped to a Pipeline. These values, which are defined through the “Variables” button when editing a Pipeline, live as meta-data outside of the YAML file. </p> <p><a href="https://drive.google.com/uc?id=158DpNMuJXv_ZmR575KDJ-tNhnOZs94sp"><img title="image" style="margin: 0px; display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=1E8HlExJao6yC5tDaf49VkRSYxs5jbv9Z" width="159" height="244" /></a></p> <h3>Define Variables in Variable Groups</h3> <p>Variable Groups are perhaps the most common mechanism to define variables as they can be reused across multiple pipelines within the same project. 
Variable Groups also support pulling their values from an Azure KeyVault which makes them an ideal mechanism for sharing secrets across projects.</p> <p>Variable Groups are defined in the “Library” section of Azure Pipelines. Variables are simply key/value pairs.</p> <p><a href="https://drive.google.com/uc?id=1l-RehVu5ezqwbbOGtvKD9EF2cGhkWss7"><img title="image" style="margin: 0px; display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=1KPrTnZwKkShuTijJvjbqOVlPWdsScmcu" width="222" height="244" /></a></p> <p><a href="https://drive.google.com/uc?id=1C8W2_saiyLIhowG2iryt5QSqAx5SqYCr"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=127sXTqdneYdpKm0ri0SIn7AlVkQTrJAc" width="644" height="159" /></a></p> <p>Variables are made available to the Pipeline when it runs, and although there are a few different syntaxes I’m going to focus on using what’s referred to as <em>macro-syntax,</em> which looks like $(VariableName)</p> <pre class="brush: bash; gutter: false; toolbar: false;">variables:
- group: MyVariableGroup
steps:
- bash: |
    echo $(USERNAME)
    printenv | sort
</pre>
<p>All variables are provided to scripts as Environment Variables. Using <em>printenv</em> dumps the list of environment variables. Both USERNAME and PASSWORD variables are present in the output.</p>
<p><a href="https://drive.google.com/uc?id=1yuvM9-4fiyfHVrEwpT4RqBijJ2-Yzewi"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=1JMutQTY2Xt4aEEpBt664Oiubx9K3CUXz" width="644" height="368" /></a></p>
<h3>Define Variables Dynamically in Scripts</h3>
<p>Variables can also be declared dynamically from scripts using a special logging syntax.</p>
<pre class="brush: bash; gutter: false; toolbar: false;">- script: |
    token=$(curl ...)
    echo "##vso[task.setvariable variable=accesstoken]$token"
</pre>
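<p>Under the hood, this “logging command” is nothing more than a specially formatted line written to stdout, which the agent watches for. A minimal sketch of its shape (the variable name and value here are hypothetical stand-ins):</p>

```bash
#!/usr/bin/env bash
# The agent scans stdout for lines of the form:
#   ##vso[task.setvariable variable=NAME]VALUE
# Everything after the closing bracket becomes the variable's value.
name="accesstoken"
value="abc123"   # hypothetical; in the example above it would come from curl
echo "##vso[task.setvariable variable=${name}]${value}"
```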
<h2>Defining Secrets</h2>
<p>Clearly, putting a clear text password variable in your pipeline is dangerous because any script in the pipeline has access to it. Fortunately, it’s very easy to lock this down by converting your variable into a secret.</p>
<p><a href="https://drive.google.com/uc?id=1kcoQ24va6OSGu69j5Hf_3gu7b118rzRU"><img title="secrets" style="display: inline; background-image: none;" border="0" alt="secrets" src="https://drive.google.com/uc?id=1MqDuDBmHQGfShaqOHDZtgfypAIktgAiS" width="644" height="127" /></a></p>
<p>Just use the lock icon to set it as a secret and then save the variable group to make it effectively irretrievable. Gandalf would be pleased.</p>
<p><img alt="Why doesn't JWfan have a secure connection? - Other Topics - JOHN ..." src="https://www.jwfan.com/forums/uploads/monthly_2019_05/E998E1F9-9E19-4DFE-B625-89BD9F6595DA.gif.d3f193ba74e61b232e1e715e0af52559.gif" /></p>
<p>Now, when we run the pipeline we can see that the PASSWORD variable is no longer an Environment variable.</p>
<p><a href="https://drive.google.com/uc?id=14N24SBZybUVCN2yf6RKDMRro_MB2pHxH"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=14fIMRKe5820Tzi33CevZ69_IcvPfLi1c" width="644" height="362" /></a></p>
<h3>Securing Dynamic Variables in Scripts</h3>
<p>Secrets can also be declared at runtime using scripts. You should always be mindful as to whether these dynamic variables could be used maliciously if not secured.</p>
<pre class="brush: bash; gutter: false; toolbar: false;">token=$(curl ...)
echo "##vso[task.setvariable variable=accesstoken;isSecret=true]$token"
</pre>
<h2>Using Secrets in Scripts</h2>
<p>Now that we know that secrets aren’t made available as Environment variables, we have to explicitly provide the value to the script – effectively “opting in” – by mapping the secret to a variable that can be used during script execution:</p>
<pre class="brush: bash; gutter: false; toolbar: false;">- script: |
    echo The password is: $password
  env:
    password: $(Password)
</pre>
<p>The above is a wonderful example of heresy, as you should never output secrets to logs. Thankfully, we don't need to worry too much about this because Azure DevOps automatically masks these values before they make it to the log.</p>
<p><a href="https://drive.google.com/uc?id=1gakeaLtwzufUsVIMTMCRTLnxQX2Mmho_"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=1FZ5Zka3W3EnO1X-UXMzX11dXTbshjkxz" width="644" height="180" /></a></p>
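<p>A rough way to picture the masking (this is an assumed simplification for illustration, not the agent’s actual implementation): every registered secret value is replaced with asterisks before the line reaches the log.</p>

```bash
#!/usr/bin/env bash
# Simplified sketch of log masking: replace any occurrence of a known
# secret value with *** before the line is written to the log.
secret="s3cr3t-value"                  # made-up secret for illustration
log_line="The password is: ${secret}"
masked="${log_line//${secret}/***}"
echo "$masked"
```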
<h2>Takeaways</h2>
<p>We should all do our part to take security concerns seriously. While it’s important to enable secrets early in your pipeline development to prevent leaking information, doing so will also prevent the costly troubleshooting effort that comes when variables are converted to secrets later on.</p>
<p>Happy coding.</p>bryanhttp://www.blogger.com/profile/01332614158223702009noreply@blogger.com0tag:blogger.com,1999:blog-8942599.post-48445338585563429002020-06-06T10:29:00.001-04:002020-07-13T13:07:33.147-04:00Downloading Artifacts from YAML Pipelines<p>Azure DevOps multi-stage YAML pipelines are pretty darn cool. You can describe a complex continuous integration pipeline that produces an artifact and then describe the continuous delivery workflow to push that artifact through multiple environments in the same YAML file.</p> <p>In today’s scenario, we’re going to suppose that our quality engineering team is using their own dedicated repository for their automated regression tests. What’s the best way to bring their automated tests into our pipeline? Let’s assume that our test automation team has their own pipeline that compiles their tests and produces an artifact so that we can run these tests with different runtime parameters in different environments.</p> <p>There are several approaches we can use. I’ll describe them from most-generic to most-awesome.</p> <h2>Download from Azure Artifacts</h2> <p>A common DevOps approach that is evangelized in <a href="https://www.amazon.ca/Continuous-Delivery-Reliable-Deployment-Automation/dp/0321601912" target="_blank">Jez Humble’s Continuous Delivery book</a>, is pushing binaries to an artifact repository and using those artifacts in ad-hoc manner in your pipelines. Azure DevOps has Azure Artifacts, which can be used for this purpose, but in my opinion it’s not a great fit. Azure Artifacts are better suited for maven, npm and nuget packages that are consumed as part of the build process. </p> <p>Don’t get me wrong, I’m not calling out a problem with Azure Artifacts that will you require you to find an alternative like <a href="https://jfrog.com/artifactory/" target="_blank">JFrog’s Artifactory</a>, my point is that it’s perhaps too generic. 
If we dumped our compiled assets into the artifactory, how would our pipeline know which version we should use? And how long should we keep these artifacts around? In my opinion, you’d want better metadata about this artifact, like source commits and build that produced it, and you’d want these artifacts to stick-around only if they’re in use. Although decoupling is advantageous, when you strip something of all semantic meaning you put the onus on something else to remember, and that often leads to manual processes that breakdown…</p> <p>If your artifacts have a predictable version number and you only ever need the latest version, there are tasks for <a href="https://docs.microsoft.com/en-us/azure/devops/pipelines/artifacts/universal-packages?view=azure-devops&tabs=yaml" target="_blank">downloading these types of artifacts</a>. Azure Artifacts refers to these loose files as “Universal Packages”:</p> <pre class="brush: bash; gutter: false; toolbar: false;">- task: UniversalPackages@0
  displayName: 'Universal download'
  inputs:
    command: download
    vstsFeed: '<projectName>/<feedName>'
    vstsFeedPackage: '<packageName>'
    vstsPackageVersion: 1.0.0
    downloadDirectory: '$(Build.SourcesDirectory)\someFolder'
</pre>
<h2>Download from Pipeline</h2>
<p>Next up: the <a href="https://docs.microsoft.com/en-us/azure/devops/pipelines/tasks/utility/download-pipeline-artifact?view=azure-devops" target="_blank">DownloadPipelineArtifact</a> task is a full-featured built-in task that can download artifacts from different sources, such as an artifact produced in an earlier stage, a different pipeline within the project, or other projects within your ADO Organization. You can even download artifacts from projects in other ADO Organizations if you provide the appropriate Service Connection.</p>
<pre class="brush: bash; gutter: false; toolbar: false;">- task: DownloadPipelineArtifact@2
  inputs:
    source: 'specific'
    project: 'c7233341-a9ff-4e76-9367-909816bcd16g'
    pipeline: 1
    runVersion: 'latest'
    targetPath: '$(Pipeline.Workspace)'
</pre>
<p>Note that if you’re downloading an artifact from a different project, you’ll need to adjust the authorization scope of the build agent. This is found in the <em>Project Settings –> Pipelines : Settings</em>. If this setting is disabled, you’ll need to adjust it at the Organization level first.</p>
<p><a href="https://drive.google.com/uc?id=1ElaXdA0YzxdtaxD3thRa-5XcwVaCWF7u"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=1fyNhIqkCzCaxADLZ7CDSigR_5wLjXW3x" width="454" height="40" /></a></p>
<p>This works exactly as you’d expect it to, and the artifacts are downloaded to $(Pipeline.Workspace). Note in the above I’m using the project guid and pipeline id, which are populated by the Pipeline Editor, but you can specify them by their name as well.</p>
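<p>For readability, the same download can reference the project and pipeline by name instead of by GUID and id. The names below are hypothetical placeholders – substitute your own project and pipeline definition names:</p>

```yaml
# Same download expressed with names instead of GUID/id
# ('MyProject' and 'MyPipelineName' are placeholders).
- task: DownloadPipelineArtifact@2
  inputs:
    source: 'specific'
    project: 'MyProject'
    pipeline: 'MyPipelineName'
    runVersion: 'latest'
    targetPath: '$(Pipeline.Workspace)'
```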
<p>My only concern is there isn’t anything that indicates our pipeline is dependent on another project. The pipeline dependency is silently being consumed… which feels sneaky.</p>
<p><a href="https://drive.google.com/uc?id=1PSEPyKDQ9XjCgyAaou_SYjUG79PgttxP"><img title="build_download_without_dependencies" style="display: inline; background-image: none;" border="0" alt="build_download_without_dependencies" src="https://drive.google.com/uc?id=1aNBGwQWDJ5t29wLYkacOgSkp8ogBCR2Z" width="1028" height="148" /></a></p>
<h2>Declared as a Resource</h2>
<p>The technique I’ve recently been using is declaring the pipeline artifact as a resource in the YAML. This makes the pipeline reference much more obvious in the pipeline code and surfaces the dependency in the build summary.</p>
<p>Although this supports the ability to trigger our pipeline when new builds are available, we’ll skip that for now and only download the latest version of the artifact at runtime.</p>
<pre class="brush: bash; gutter: false; toolbar: false;">resources:
  pipelines:
  - pipeline: my_dependent_project
    project: 'ProjectName'
    source: PipelineName
    branch: master
</pre>
<p>To download artifacts from that pipeline we can use the <em><a href="https://docs.microsoft.com/en-us/azure/devops/pipelines/process/resources?view=azure-devops&tabs=schema#download-for-pipelines" target="_blank">download</a></em> alias for <em>DownloadPipelineArtifact.</em> The syntax is more terse and easier to read. This example downloads the published artifact 'myartifact' from the declared pipeline reference. The <em>download</em> alias doesn’t appear to provide a way to specify the download location. In this example, the artifact is downloaded to <em>$(Pipeline.Workspace)\my_dependent_project\myartifact</em></p>
<pre class="brush: bash; gutter: false; toolbar: false;">- download: my_dependent_project
  artifact: myartifact
</pre>
<p>With this in place, the artifact shows up and change history appears in the build summary.</p>
<p><a href="https://drive.google.com/uc?id=1Nk7grNfA42fzR1rXUGtY8p3wp_jFmHKO"><img title="build_download_with_pipeline_resource" style="display: inline; background-image: none;" border="0" alt="build_download_with_pipeline_resource" src="https://drive.google.com/uc?id=14Q2IyF47q701MqyuFUd9vm3FbaPeoM8p" width="1028" height="152" /></a></p>
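<p>For reference, here are the resource declaration and the download step from above combined into a single minimal sketch:</p>

```yaml
# Combined sketch: declare the pipeline resource, then download its artifact.
resources:
  pipelines:
  - pipeline: my_dependent_project   # local alias used by the download step
    project: 'ProjectName'
    source: PipelineName
    branch: master

steps:
- download: my_dependent_project
  artifact: myartifact
# Files land in $(Pipeline.Workspace)/my_dependent_project/myartifact
```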
<h3>Update: 2020/06/18! Pipelines now appear as Runtime Resources</h3>
<p>At the time this article was written, <a href="https://developercommunity.visualstudio.com/idea/901328/select-artifacts-in-yaml-release-pipeline.html" target="_blank">there was an outstanding defect</a> for referencing pipelines as resources. With this defect resolved, you can now specify the version of the pipeline resource to consume when manually kicking-off a pipeline run.</p>
<ol>
<li>Start a new pipeline run
<br />
<br /><a href="https://drive.google.com/uc?id=1M7hJjyDQ8eV6kgvcohSzCNda-zPXIlXJ"><img title="manually-trigger-build-select-resource" style="display: inline; background-image: none;" border="0" alt="manually-trigger-build-select-resource" src="https://drive.google.com/uc?id=13LG0b1g4GvFGdx-ux9y12EgKev5BrLLr" width="244" height="204" /></a>
<br /></li>
<li>Open the list of resources and select the pipeline resource
<br />
<br /><a href="https://drive.google.com/uc?id=1zwexJayjnm_y_ze38hDHLemGlnEOlVwG"><img title="manually-trigger-build-select-resource-pipeline" style="display: inline; background-image: none;" border="0" alt="manually-trigger-build-select-resource-pipeline" src="https://drive.google.com/uc?id=1J5kYklMfA4QLgYZB9JRKkketrCc-CPy8" width="244" height="109" /></a>
<br /></li>
<li>From the list of available versions, pick the version of the pipeline to use:
<br />
<br /><a href="https://drive.google.com/uc?id=1IWaGmTJQBLm8DA2NPd1sL2MZZ2vR6RRg"><img title="manually-trigger-build-select-resource-pipeline-version" style="display: inline; background-image: none;" border="0" alt="manually-trigger-build-select-resource-pipeline-version" src="https://drive.google.com/uc?id=1GmGaA1slkXGr0QCzauilFDrEVNguJPiz" width="200" height="244" /></a></li>
</ol>
<p>With this capability, we now have full traceability and flexibility to specify which pipeline resource we want!</p>
<h2>Conclusion</h2>
<p>So there you go. Three different ways to consume artifacts.</p>
<p>Happy coding!</p>bryanhttp://www.blogger.com/profile/01332614158223702009noreply@blogger.com0tag:blogger.com,1999:blog-8942599.post-28877393275264530812020-02-10T09:48:00.000-05:002020-07-20T12:10:31.528-04:00Challenges with Parallel Tests on Azure DevOps<p>As I wrote about last week, <a href="http://www.bryancook.net/2020/02/adventures-in-code-spelunking.html">Adventures in Code Spelunking</a>, relentlessly digging into problems can be a time-consuming but rewarding task.</p> <p>That post centers around a <a href="https://twitter.com/bcook/status/1219011283829362691" target="_blank">tweet I made while I was struggling with an issue with VSTest on my Azure DevOps Pipeline</a>. I'm feel I'm doing something interesting here: I've associated my automated tests to my test cases and I'm asking the VSTest task to run all the tests in the Plan; this is considerably different than just running the tests that are contained in the test assemblies. The challenge at the time was that the test runner wasn't finding any of my tests. My spelunking exercise revealed that the runner required an array of test suites despite the fact that the user interface restricts you to pick only one. I modified my yaml pipeline to contain a comma-delimited list of suites. Done!</p> <h2>Next challenge, unlocked!</h2> <p>Unfortunately, this would turn out to be a short victory, as I quickly discovered that although the VSTest task was able to find the test cases, the test run would simply hang with no meaningful insight as to why.</p> <pre class="brush: plain">[xUnit.net 00:00:00.00] xUnit.net VSTest Adapter v2.4.1 (64-bit .NET Core 3.1.1)
[xUnit.net 00:00:00.52] Discovering: MyTests
[xUnit.net 00:00:00.57] Discovered: MyTests
[xUnit.net 00:00:00.57] Starting: MyTests
-> Loading plugin D:\a\1\a\SpecFlow.Console.FunctionalTests\TechTalk.SpecFlow.xUnit.SpecFlowPlugin.dll
-> Using default config
</pre>
<p>So, on a wild hunch I changed my test plan so that only a single test case was automated, and it worked. What gives?</p>
<h2>Is it me, or you? (it’s probably you)</h2>
<p>The tests work great on my local machine, so it’s easy to fall into a trap that the problem isn’t me. But to truly understand the problem is to be able to recreate it locally. And to do that, I’d need to strip away all the unique elements until I had the most basic setup.</p>
<p>My first assumption was that it might actually be the VSTest runner -- a possible issue with the “Run Test Plan” option I was using. So I modified my build pipeline to just run my unit tests like normal regression tests. And surprisingly, the results were the same. So, maybe it’s my tests.</p>
<p>Under a hunch that I might have a threading deadlock somewhere in my tests, I hunted through my solution looking for rogue asynchronous methods and notorious deadlock maker <em>Task.Result.</em> There were none that I could see. So, maybe there’s a mismatch in the environment setup somehow?</p>
<p>Sure enough, I had some mismatches. My test runner from the command-prompt was an old version. The server build agent was using a different version of the test framework than what I had referenced in my project. After upgrading nuget packages and Visual Studio versions, and fixing the pipeline to exactly match my environment – I was still unable to reproduce the problem locally.</p>
<h2>I have a fever, and the only prescription is more logging</h2>
<p>Well, if it’s a deadlock in my code, maybe I can introduce some logging into my tests to put a spotlight on the issue. After some initial futzing around (I’m amazed futzing wasn’t caught by spellcheck, btw), I was unable to get any of these log messages to appear in my output. Maybe xUnit has a setting for this?</p>
<p>Turns out, xUnit has a great logging capability but requires the magical presence of the <a href="https://xunit.net/docs/configuration-files" target="_blank">xunit.runner.json</a> file in the working directory.</p>
<pre class="brush: plain">{
  "$schema": "https://xunit.net/schema/current/xunit.runner.schema.json",
  "diagnosticMessages": true
}
</pre>
<p>The presence of this file reveals this simple truth:</p>
<pre class="brush: plain">[xUnit.net 00:00:00.00] xUnit.net VSTest Adapter v2.4.1 (64-bit .NET Core 3.1.1)
[xUnit.net 00:00:00.52] Discovering: MyTests (method display = ClassAndMethod, method display options = None)
[xUnit.net 00:00:00.57] Discovered: MyTests (found 10 test cases)
[xUnit.net 00:00:00.57] Starting: MyTests <font style="background-color: rgb(255, 255, 0);">(parallel test collection = on, max threads = 8)</font>
-> Loading plugin D:\a\1\a\SpecFlow.Console.FunctionalTests\TechTalk.SpecFlow.xUnit.SpecFlowPlugin.dll
-> Using default config
</pre>
<p>And when compared to the server:</p>
<pre class="brush: plain">[xUnit.net 00:00:00.00] xUnit.net VSTest Adapter v2.4.1 (64-bit .NET Core 3.1.1)
[xUnit.net 00:00:00.52] Discovering: MyTests (method display = ClassAndMethod, method display options = None)
[xUnit.net 00:00:00.57] Discovered: MyTests (found 10 test cases)
[xUnit.net 00:00:00.57] Starting: MyTests <font style="background-color: rgb(255, 255, 0);">(parallel test collection = on, max threads = 2)</font>
-> Loading plugin D:\a\1\a\SpecFlow.Console.FunctionalTests\TechTalk.SpecFlow.xUnit.SpecFlowPlugin.dll
-> Using default config
</pre>
<h2>Yes, Virginia, there is a thread contention problem</h2>
<p>The build agent on the server has only 2 virtual CPUs allocated, so the test collections executing in parallel are likely starving each other as they try to spawn additional threads to perform their asynchronous operations. By setting the <a href="https://xunit.net/docs/configuration-files#maxParallelThreads" target="_blank">maxParallelThreads</a> to “2” locally, I am able to completely reproduce the problem from the server.</p>
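<p>With <em>maxParallelThreads</em> added to the same <em>xunit.runner.json</em>, the server’s constraint can be reproduced locally:</p>

```json
{
  "$schema": "https://xunit.net/schema/current/xunit.runner.schema.json",
  "diagnosticMessages": true,
  "maxParallelThreads": 2
}
```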
<p>I can <a href="https://xunit.net/docs/running-tests-in-parallel" target="_blank">disable parallel execution</a> in the tests by adding the following to the assembly:</p>
<pre class="brush: csharp">[assembly: CollectionBehavior(DisableTestParallelization = true)]</pre>
<p>…or by disabling parallel execution in the xunit.runner.json:</p>
<pre class="brush: plain">{
  "$schema": "https://xunit.net/schema/current/xunit.runner.schema.json",
  "diagnosticMessages": true,
  "parallelizeTestCollections": false
}
</pre>bryanhttp://www.blogger.com/profile/01332614158223702009noreply@blogger.com0tag:blogger.com,1999:blog-8942599.post-46822239437309500982020-02-07T09:52:00.001-05:002020-07-20T12:10:15.593-04:00Adventures in Code Spelunking<p><a href="https://twitter.com/bcook/status/1219011283829362691"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=1yNI82xf8DtkqKRGjc5EFsZNiOHJYlgjc" width="610" height="289" /></a></p> It started innocently enough. I had an Azure DevOps Test Plan that I wanted to associate some automation to. I’d wager that there are only a handful of people on the planet who’d be interested by this, and I’m one of them, but the online walk-throughs from Microsoft’s online documentation seemed compatible with my setup – so why not? So, with some time in my Saturday afternoon and some horrible weather outside, I decided to try it out. And after going through all the motions, my first attempt failed spectacularly with no meaningful errors. <p>I re-read the documentation, verified my setup and it failed a dozen more times. Google and StackOverflow yielded no helpful suggestions. None.</p> <p>It’s the sort of problem that would drive most developers crazy. We’ve grown accustomed to having all the answers a simple search away. Surely others have already had this problem and solved it. But when the oracle of all human knowledge comes back with a fat goose egg you start to worry that we’ve all become a group of truly lazy developers that can only find ready-made code snippets from StackOverflow.</p> <p>When you are faced with this challenge, don’t give up. Don’t throw up your hands and walk away. Surely there’s an answer, and if there isn’t, you can make one. I want to walk you through my process.</p> <h3>Read the logs</h3> <p>If the devil is in the details, surely he’ll be found in the log file. 
You’ve probably already scanned the logs for obvious errors, it’s okay to go back and look again. If it seems the log file is gibberish at first glance, it often is. But sometimes the log contains some gems that give clues as to what’s missing. Maybe the log warns that a default value is missing, maybe you’ll discover a typo in a parameter.</p> <h3>Read the logs, again</h3> <p>Amp up the verbosity on the logs if possible and try again. Often developers use the verbose logging to diagnose problems that happen in the field, so maybe the hidden detail in the verbose log may reveal further gems.</p> <blockquote> <p><em>Now’s a good moment for some developer insight. Are these log messages helpful? Would someone reading the logs from your program be as delighted or frustrated with the quality of these output messages?</em></p> </blockquote> <p>Keep an eye out for references to class names or methods that appear in the log or stack traces. These could lead to further clues or give you a starting point for the next stage.</p> <h3>Find the source</h3> <p>Microsoft is the largest contributor to open-source projects on Github than anyone else, so it makes sense that they bought them. Just watching the culture shift within Microsoft in the last decade has been astounding and now it seems that almost all of their properties have their source code freely available for public viewing. Some sleuthing may be required to find the right repository. Sometimes it’s as easy as Googling “<name-of-class> github” or following the link on a nuget or maven repository.</p> <p>But once you’ve found the source, you enter a world of magic. Best case scenario, you immediately find the control logic in the code that relates to your problem. Worse case scenario, you learn more about this component than anyone you know. 
Maybe you’ll discover they parse inputs as case sensitive strings, or some conditional logic requires the presence of a parameter you’re not using.</p> <p>Within Github, your secret weapon is the ability to search within the repository, as you can find the implementation and usages in a single search. Recent changes within Github’s web-interface allows you to navigate through the code by clicking on class and method names – support is limited to specific programming languages but I’ll be in heaven when this capability expands. The point is to find a place to start and keep digging. It’ll seem weird not being able to set a breakpoint and simply run the app, but the ability to mentally trace through the code is invaluable. Practice makes perfect.</p> <p>If you’re lucky, the output from the log file will help guide you. Go back and read it again.</p> <blockquote> <p>As another developer insight – this code might be beautiful or make you want to vomit. Exposure to other approaches can validate and grow your opinions on what makes good software. I encourage all developers to read as much code that isn’t theirs.</p> </blockquote> <p>After spending some time looking at the source, check out their issues list. You might discover your problem is known by a different name that is only familiar to those that wrote it. Alternative suitable workarounds might appear from other problems.</p> <h3>Roadblocks are just obstacles you haven’t overcome</h3> <p>If you hit a roadblock, it helps to step back and think of other ways of looking at the problem. What alternative approaches could you explore? And above all else, never start from a position where you assume everything on your end is correct. Years ago when I worked part-time at the local computer repair shop, I learnt the hard way that the easiest and most blatantly obvious step, checking to see if it was plugged in, was the most important step to not skip. 
When you keep an open-mind, you will never run out of options.</p> <p>As evidenced by the tweet above, the error message I was experiencing was something that had no corresponding source-code online and all of my problems were baked into a black-box that only exists on the build server when the build runs. When the build runs… on the build server. When the build runs on the build agent… that I can install on my machine. Within minutes of installing a local build agent, I had the mysterious black-box gift wrapped on my machine.</p> <p>No source code? No problem. <a href="https://www.jetbrains.com/decompiler/" target="_blank">JetBrain’s dotPeek</a> is a free utility that allows you to decompile and review any .net executable.</p> <p>Just dig until you hit the next obstacle. Step back, reflect. Dig differently. As I sit in a coffee shop looking out at the harsh cold of our Canadian winter, I reflect that we have it so easy compared to the original pioneers who forged their path here. That’s who you are, a pioneer cutting a path that no one has tread before. It isn’t easy, but the payoff is worth it.</p> <p>Happy coding.</p>bryanhttp://www.blogger.com/profile/01332614158223702009noreply@blogger.com0tag:blogger.com,1999:blog-8942599.post-69520243777005334702020-02-06T13:34:00.000-05:002020-02-06T14:12:12.719-05:00GoodReads 2019 Recap<p>Hey Folks, like all posts that start in January, I’m starting my posts with the traditional …it’s been a while opener. Last year marks a first for this blog where I simply did not blog at all, which feels really strange. The usual suspects apply: busy at work, busy with kids, etc. However, in July of 2018 I started a new habit of taking a break from writing and focusing on reading more. I had planned to read 12 books in 2018 but read 20. Then I planned to read 24 in 2019 but read 41. I’ll probably have finished reading another book while I was writing this.</p> <p>Maybe your New Year’s Resolution is to read more books. 
So, here are some highlights of book I read last year that you might enjoy:</p> <br /> <h2>The Murderbot Diaries</h2> <p><a href="https://www.amazon.com/dp/B07FK8SNWY?searchxofy=true&ref_=dbs_s_aps_series_rwt" target="_blank"><img alt="The Murderbot Diaries" src="https://images-na.ssl-images-amazon.com/images/I/B140bZAt8eS._SY300_.png" width="384" height="331" /></a></p> <p>Love, love, love Murderbot! By far, my favourite new literary character. <a href="https://www.amazon.com/dp/B07FK8SNWY?searchxofy=true&ref_=dbs_s_aps_series_rwt" target="_blank">The Murderbot diaries</a> is set in the future where mankind has begun to explore planets beyond our solar system. If you were planning on exploring a planet, you’d hire a company to provide you with the assets to get there and as part of that contract, they’d provide you with a security detail to keep <strike>their assets</strike> you safe. Among that security detail is our protagonist, a security android that who has hacked his own governor module so it no longer needs to follow orders. What does a highly dangerous artificial intelligence with computer hacking skills and weapons embedded in its arms do with it’s own free will? Watch downloaded media and pretend to follow your orders. So. freaking. good.</p> <br /> <h2>The Broken Earth Series</h2> <p><a href="https://www.amazon.com/dp/B074CBFX6M?ref_=dbs_s_ks_series_rwt" target="_blank"><img alt="The Broken Earth" src="https://images-na.ssl-images-amazon.com/images/I/B1FxZ1U3ESS._SY300_.png" /></a></p> <p><a href="https://www.amazon.com/dp/B074CBFX6M?ref_=dbs_s_ks_series_rwt" target="_blank">The Fifth Season</a> is strange mix of fantasy meets apocalypse survival, this series is so brilliantly written that I got emotional when it ended. The world-building is vast and revealed appropriately as the story progresses but this attention to creativity does not overwhelm the characters’ depth or story arcs. The world, perhaps our own, is a distant future where history is lost. 
Artifacts of dead-civilizations, like the crystal obelisks that float aimlessly in the sky have no explanation and every few hundred years, the earth undergoes a geological disaster known as a Season. Seasons may last for years. This one, may last for centuries.</p> <p>Magic exists, but its source is a connection to the earth – an ability to delve, harness and channel the earth’s energy as a destructive force. For obvious reasons, those that are born with this ability are feared and thus rounded up and controlled by a ruling class. Our story involves a woman who secretly hides her ability and her kidnapped daughter who might be more powerful.</p> <h2>Recursion</h2> <p><a href="https://www.amazon.com/Recursion-Novel-Blake-Crouch-ebook/dp/B07HDSHP7N/ref=sr_1_1?keywords=recursion&qid=1581006441&s=digital-text&sr=1-1" target="_blank"><img alt="Recursion: A Novel by [Crouch, Blake]" src="https://images-na.ssl-images-amazon.com/images/I/51HsFY3rRBL.jpg" width="261" height="403" /></a></p> <p>Blake Crouch blew me away in 2018 with <a href="https://www.amazon.com/Dark-Matter-Novel-Blake-Crouch-ebook/dp/B0180T0IUY/ref=pd_sim_351_1/132-7559658-3806759?_encoding=UTF8&pd_rd_i=B0180T0IUY&pd_rd_r=6b37fc98-3410-470d-9cf8-0c102a23f8ac&pd_rd_w=8IwTX&pd_rd_wg=uVfQW&pf_rd_p=39ad6e79-f504-4088-9953-31b1a1ed0061&pf_rd_r=ZYCEA3R54T1NSTPR8E1X&psc=1&refRID=ZYCEA3R54T1NSTPR8E1X" target="_blank">Dark Matter,</a> <a href="https://www.amazon.com/Recursion-Novel-Blake-Crouch-ebook/dp/B07HDSHP7N/ref=sr_1_1?keywords=recursion&qid=1581006441&s=digital-text&sr=1-1" target="_blank">Recursion</a> follows the story of a detective who investigates the suicide of a woman who suffers from a disease that creates a disconnect between their memories and reality. 
Is it an epidemic or a conspiracy?</p> <br /> <h2>The Southern Reach Trilogy (Annihilation, Authority, Acceptance)</h2> <p><a href="https://www.amazon.com/Area-Three-Book-Bundle-Annihilation-Acceptance-ebook/dp/B012OHXFM6/ref=sr_1_2?crid=5MFY6KRTL2RY&keywords=annihilation+jeff+vandermeer&qid=1581006891&s=digital-text&sprefix=annihilation+jeff%2Cdigital-text%2C173&sr=1-2" target="_blank"><img alt="Area X Three-Book Bundle: Annihilation; Authority; Acceptance (Southern Reach Trilogy) by [VanderMeer, Jeff]" src="https://images-na.ssl-images-amazon.com/images/I/51NEZH4ooZL.jpg" width="257" height="342" /></a></p> <p>I first heard of the book <a href="https://www.amazon.com/Area-Three-Book-Bundle-Annihilation-Acceptance-ebook/dp/B012OHXFM6/ref=sr_1_2?crid=5MFY6KRTL2RY&keywords=annihilation+jeff+vandermeer&qid=1581006891&s=digital-text&sprefix=annihilation+jeff%2Cdigital-text%2C173&sr=1-2" target="_blank">Annihilation</a> from a CBC review of the bizarre and stunning visuals of the <a href="https://www.imdb.com/title/tt2798920/" target="_blank">Annihilation movie starring Natalie Portman</a>. The CBC review of the movie suggested that the director (<a href="https://www.imdb.com/name/nm0307497/?ref_=tt_ov_dr" target="_blank">Alex Garland</a>) started the production of the movie before the 2nd and 3rd book of the series was written. Garland had support from the author, but it’s not <a href="https://www.theverge.com/2018/2/28/17060210/annihilation-alex-garland-film-novel-book-biggest-differences" target="_blank">surprising that the movie’s ending is radically different</a> than the source material. I loved the movie, but needed to understand. The movie is a Kubrik mind-altering attempt to bring an unfilmable novel to the big screen, but the novel is so much more. The plot of the entire movie happens within the first few chapters, so if you liked the film the novel goes much further off the deep end. 
For example, the psychologist on the exhibition uses hypnosis and suggestive triggers on the rest of the exhibition to force compliance.  It’s not until our protagonist, the biologist, is infected by the effects of Area X does she become immune to the illusion.</p> <p>The insidious aspect is the villain is a mysterious environment with no face, presence or motive. How do you defeat an environment? (spoiler: <span style="color: white;">you can’t.  The invasive species wins (annihilation), the people in charge that are hiding the conspiracy have no idea how to stop it (authority), and the sooner you come to terms with it the better you’ll be (acceptance) – and maybe, given what we’ve done to the environment in the past, we deserve the outcome</span>)</p>bryanhttp://www.blogger.com/profile/01332614158223702009noreply@blogger.com0tag:blogger.com,1999:blog-8942599.post-15388801646018597992018-03-11T21:38:00.001-04:002018-03-11T21:38:26.553-04:00On Code Reviews<p>Recently, a colleague reached out looking for advise on documentation for code reviews. It was a simple question, like so many others that arrive in my inbox phrased as a “quick question” yet don’t seem to have a “quick answer”. It struck me as odd that we didn’t have a template for this sort of thing. Why would we need this and under what circumstances would a template help?</p> <p>After some careful contemplation, I landed on two scenarios for code review. One definitely needs a template, the other does not.</p> <h3>Detailed Code Analysis</h3> <p>If you’ve been tasked with writing up a detailed analysis of a code-base, I can see benefit for a structured document template. Strangely, I don’t have a template but I’ve done this task many different times. The interesting part of this task is that the need for the document is often to support a business case for change or to provide evidence to squash or validate business concerns. 
Understanding the driving need for the document will shape how you approach the task. For example, you may be asked to review the code to identify performance improvements. Or perhaps the business has lost confidence in their team’s ability to estimate and wants an outside party to validate that the code follows sound development practices (and isn’t a hot mess).</p> <p>In general, a detailed analysis is usually painted in broad strokes with high-level findings, e.g., classes with too many dependencies, insufficient error handling, lack of unit tests, insecure coding practices. As some of these can be perceived as the opinion of the author, it’s imperative that you have hard evidence to support your findings. This is where tools like <a href="https://www.sonarqube.org/" target="_blank">SonarQube</a> shine, as they can highlight design and security flaws, potential defects and even suggest how many hours of technical debt a solution has. Some tools like <a href="https://www.ndepend.com/" target="_blank">NDepend</a> or <a href="https://www.jarchitect.com/" target="_blank">JArchitect</a> allow you to write SQL-like queries to find areas that need the most attention. 
For example, a query to “find highly used methods that have high cyclomatic complexity and low test coverage” can identify high-yield pain points.</p> <p>If I were to have a template for this sort of analysis, it would have:</p> <ul> <li>An executive summary that provides an overview of the analysis and how it was achieved</li> <li>A list of the top 3-5 key concerns, where each concern gets a short, concise paragraph with a focus on the business impact</li> <li>A breakdown of findings in key areas:</li> <ul> <li>Security</li> <li>Operational Support and Diagnostics</li> <li>Performance</li> <li>Maintainability Concerns (code-quality, consistency, test automation and continuous delivery).</li> </ul> </ul> <h3>Peer Code Review</h3> <p>If we’re talking about code review for a pull request, my approach is very different. A template might be useful here, but it’s likely less needed.</p> <p>First, <a href="https://en.wikipedia.org/wiki/List_of_tools_for_static_code_analysis" target="_blank">a number of linting tools</a> such as FxCop, JSLint, etc., can be included in the build pipeline so warnings and potential issues with the code are identified during the CI build and can be measured over time. Members of the team who are aware of these rules will call them out where appropriate in code reviews, or they’ll set targets for reducing these warnings over time.</p> <p>Secondly, it’s best to let the team establish stylistic rules and formatting rather than trying to enforce a standard from above. The reasoning for this should be obvious: code style can be a religious war where there is no right answer. If you set a standard from above, you’ll spend your dying breath policing a codebase where developers don’t agree with you. 
In the end, consistency throughout the codebase should trump personal preference, so if the team can decide like adults which rules they feel are important, then they’re less likely to act like children when reviewing each other’s work.</p> <p>With linting and stylistic concerns out of the way, what should remain is a process that requires all changes to be reviewed by one or more peers, who read the code to understand what it does and to suggest alternatives the author hadn’t considered. I’ve always seen code review as a discussion rather than policing, which is always more enjoyable.</p>bryanhttp://www.blogger.com/profile/01332614158223702009noreply@blogger.com0tag:blogger.com,1999:blog-8942599.post-39724345835843312342017-10-19T09:04:00.000-04:002017-10-19T09:04:06.528-04:00Xamarin.Forms with Caliburn.Micro walk-through<p>As of this morning, I posted a new version of my <a href="https://marketplace.visualstudio.com/vsgallery/9a4df8ef-3a98-48cb-b9e8-35a7b6d94478" target="_blank">Xamarin.Forms with Caliburn.Micro Starter Kit</a> on the Visual Studio Marketplace.</p> <p>This video provides a quick walk-through of using the template.</p> <iframe height="315" allowfullscreen="allowfullscreen" src="https://www.youtube.com/embed/mw8arE2bIow?rel=0" frameborder="0" width="560"></iframe>bryanhttp://www.blogger.com/profile/01332614158223702009noreply@blogger.com0tag:blogger.com,1999:blog-8942599.post-33111939535014932082017-10-16T09:03:00.000-04:002017-10-18T16:03:42.162-04:00Jump start your next project with Xamarin.Forms Caliburn.Micro Starter Kit<p>Hey all! 
I’ve bundled my <a href="http://www.bryancook.net/2017/02/getting-started-with-xamarinforms-and.html" target="_blank">walkthrough of setting up a Xamarin.Forms to use Caliburn.Micro</a> for <a href="http://www.bryancook.net/2017/02/configure-xamarinforms-droid-to-use.html" target="_blank">Android</a>, <a href="http://www.bryancook.net/2017/02/configure-xamarinforms-ios-to-use.html" target="_blank">iOS</a> and <a href="http://www.bryancook.net/2017/02/configuring-xamarinformsuwp-to-use.html" target="_blank">UWP</a> into a Visual Studio Project Template and made it available in the <a href="https://marketplace.visualstudio.com/vsgallery/9a4df8ef-3a98-48cb-b9e8-35a7b6d94478" target="_blank">Visual Studio Extensions Gallery</a>.  </p> <p>You can <a href="https://marketplace.visualstudio.com/vsgallery/9a4df8ef-3a98-48cb-b9e8-35a7b6d94478" target="_blank">download it directly here</a>, or from within Visual Studio: <em>Tools –> Extensions and Updates –> Online.</em></p> <p><strong>Update 10/18/2017:</strong></p> <ul> <li>I had to republish the package as a “Tool” because it includes a few code snippets. The VS Gallery doesn’t allow you to change the classification of the VSIX, so I had to republish under a new identifier. 
You’ll need to uninstall the old version and install the new template.</li> </ul> <p><a href="https://lh3.googleusercontent.com/-F23WFgkBMhs/WeILRf2LxGI/AAAAAAAABk8/g59QLAXPsT4W661Ew266sRc4nO41cnYFQCHMYCw/s1600-h/image%255B3%255D"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://lh3.googleusercontent.com/-5-_D_dgatyk/WeILRljibiI/AAAAAAAABlA/_3OK6E2-e8kq0enUq7g6T05gwNrQ6AwdQCHMYCw/image_thumb%255B1%255D?imgmax=800" width="644" height="448" /></a></p> <p>As a multi-project template, it’s very straightforward to use: simply choose File –> New and select “Xamarin.Forms with Caliburn.Micro”</p> <p><a href="https://lh3.googleusercontent.com/-ACycGs4J7GE/WeILSPnFn6I/AAAAAAAABlE/1khUEhyD02odyxRN0Ml16Br3PwrovrTsACHMYCw/s1600-h/image%255B7%255D"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://lh3.googleusercontent.com/-QQwMAW1r8NM/WeILSilhn7I/AAAAAAAABlI/rohFRgoHdGg4af9wkYOEv4Gt6TjCVKpTACHMYCw/image_thumb%255B3%255D?imgmax=800" width="644" height="448" /></a></p> <p>This will create a solution with the following structure:</p> <p><a href="https://lh3.googleusercontent.com/-NfWZorTyqtA/WeILS-xQJ3I/AAAAAAAABlM/lkoyPMsSRdQybl3qYhLWJwE3HsIjbaiBACHMYCw/s1600-h/image%255B11%255D"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://lh3.googleusercontent.com/-Cy23AwoAyIQ/WeILTC9EK4I/AAAAAAAABlQ/v_g7pAyEGPAdAOWyNhyPWV16DgvdeiaowCHMYCw/image_thumb%255B5%255D?imgmax=800" width="288" height="484" /></a></p> <p>Which, when run (on the platform of your choosing), looks like the screenshot below. 
This is right where we left off from my walk-through earlier this year and is a great starting point for prototyping or building your next app.</p> <p><a href="https://lh3.googleusercontent.com/-rcAvL5GkkQQ/WeezlzNkWpI/AAAAAAAABls/TvIOcDwKQIUtU_uRjfUTjJnG2IzffEVngCHMYCw/s1600-h/image%255B4%255D"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://lh3.googleusercontent.com/-tbUCqCLuGoM/WeeznTFv5DI/AAAAAAAABlw/DJFXIg6-Rks9NpY1PJ7WO-76QcXnztvggCHMYCw/image_thumb%255B1%255D?imgmax=800" width="265" height="484" /></a></p> <h3>Known Issues</h3> <ul> <li>Windows does not automatically create a signing key and identity for your app. Be sure to edit the UWP manifest and associate with your signing identity.</li> </ul> <p>The <a href="https://github.com/bryanbcook/xf.cm.starterkit" target="_blank">source code for the starter kit can be found here</a>. Let me know what you think!</p>bryanhttp://www.blogger.com/profile/01332614158223702009noreply@blogger.com0tag:blogger.com,1999:blog-8942599.post-89266439975154012702017-10-10T09:53:00.000-04:002017-10-10T09:53:00.235-04:00Bundle your Visual Studio Solution as a Multi-Project Template<p>Earlier this year I provided a <a href="http://www.bryancook.net/" target="_blank">walkthrough of setting up a Xamarin.Forms project that leveraged Caliburn.Micro</a> for <a href="http://www.bryancook.net/2017/09/unit-testing-xamarinforms-behaviors.html" target="_blank">Android</a>, <a href="http://www.bryancook.net/2017/02/configure-xamarinforms-ios-to-use.html" target="_blank">iOS</a> and <a href="http://www.bryancook.net/2017/02/configuring-xamarinformsuwp-to-use.html" target="_blank">UWP</a>. I had big plans for extracting the contents of that walkthrough and providing it as a NuGet package. Plans changed however, and I’ve decided to package the entire solution as a Multi-Project Template and provide it as an add-on to Visual Studio (VSIX). 
This post provides a walk-through on how to create multi-project templates.</p>
<h3>Wait, why not NuGet?</h3>
<p>First off, as an aside, let’s go back and look at what I wanted to do. I wanted to provide a starter-kit of files that would jump start your efforts and allow you to modify my provided files as you see fit. As a NuGet package, I can deliver these files to any project simply by adding loose code files to the <em>content</em> folder of the NuGet package. Two things that are really awesome about this: the code files can be treated as <a href="https://docs.microsoft.com/en-us/nuget/create-packages/source-and-config-file-transformations" target="_blank">source code transforms by changing their extension to *.pp</a>, and through platform targeting I could deliver different content files per platform (Xamarin.iOS10, Xamarin.Android10, uap10.0, etc). With this approach, you would simply create a new Xamarin.Forms project, then add the NuGet package to all projects. Bam. Easy.</p>
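<p>As a quick illustration of the *.pp transform mechanism (the file name and namespace below are hypothetical), a content file in the package has Visual Studio project properties substituted into it at install time:</p>
<pre class="brush: csharp;">// content/Services/MetricsService.cs.pp
// On install, NuGet replaces $rootnamespace$ with the consuming project's
// root namespace and drops the .pp extension.
namespace $rootnamespace$.Services
{
    public class MetricsService
    {
        // starter-kit code the consumer is free to modify
    }
}
</pre>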
<p>But there are a few problems with this approach:</p>
<ul>
<li><strong>Existing files.</strong> My NuGet package would certainly be replacing existing files in your solution. I’d want to overwrite key parts of the initial template (<em>App.xaml</em>, <em>AppDelegate</em>, <em>Activity</em>, etc) and in some cases delete files (<em>MainPage.xaml</em>). Technically, I can overcome these side-effects by modifying the project through a NuGet install script (install.ps1). However, you would be prompted during the install about the replacements and if you clicked ‘No’ when prompted to replace these files… my template wouldn’t work.</li>
<li><strong>Delivering Updates.</strong> This is the funny thing about this approach -- it is really intended as a one-time deal. You would add the starter files to your project and then begin to modify and extend to your heart’s content. However, as the package author, no doubt I would find an issue or improvement for the package and publish it. If you were to update the package, it would repeat its initialization process and nuke your customizations. I would prefer not to <a href="https://imgur.com/gallery/KLlgo" target="_blank">see you when you’re angry</a>.</li>
<li><strong>Not guaranteed.</strong> Lastly, nothing would stop you from adding the NuGet package to only one of your projects, or to a library that isn’t intended as a Xamarin.Forms project.</li>
</ul>
<p>Above all else, the NuGet documentation <a href="https://docs.microsoft.com/en-us/nuget/schema/nuspec#including-content-files" target="_blank">clearly states that these files should be treated as immutable and
not intended to be modified by the consuming project</a>. And since the best place to add the package is immediately after you create the project using a Visual Studio Template, why not just make a Template?</p>
<h3>Creating a Multi-Project Template</h3>
<p>While Multi-Project Templates have been around for a while, their tooling has improved considerably over the last few releases of Visual Studio. Although there isn’t a feature to export an entire solution as a multi-project template, creating one conceptually works the same way as creating a single project template and then tweaking it slightly.</p>
<p>There are two ways to create a Project Template. The first and easiest is simply to select <em>Project –> Export Template</em>. The wizard that appears prompts you for a project and places your template in the <em>My Exported Templates</em> folder.</p>
<p>The second approach requires you to install the Visual Studio SDK, which can be found as an option in the initial installer. When you have the SDK installed, you can create a Project Template as an item in your solution. This project includes the necessary vstemplate files and produces the packaged template every time you build.</p><p><a href="https://lh3.googleusercontent.com/-JuWfeKNLO0I/Wdf-tcizQbI/AAAAAAAABkQ/YnvyJmEgJl0ifGWtFEcC1u-8ATmQmprtwCHMYCw/s1600-h/image%255B140%255D"><img width="504" height="351" title="image" style="display: inline; background-image: none;" alt="image" src="https://lh3.googleusercontent.com/-1kDvNZogNrc/Wdf-t6axfsI/AAAAAAAABkU/3XstP367MfgfkR_7icdhFWXCaGrAfq0OwCHMYCw/image_thumb%255B136%255D?imgmax=800" border="0"></a></p>
<p>Effectively, a Project Template is just a zip file with a .vstemplate file in it. A Multi-Project Template has a single .vstemplate that points to templates in subfolders. Here’s how I created mine:</p>
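<p>Concretely, the finished package ends up with a layout roughly like this (the folder names match the ones used in the steps below):</p>
<pre>MultiProjectTemplate.zip
  MyTemplate.vstemplate              (root template, Type="ProjectGroup")
  _icon.ico
  XF\MyTemplate.vstemplate           (PCL project template and its files)
  XF.Android\MyTemplate.vstemplate
  XF.iOS\MyTemplate.vstemplate
  XF.UWP\MyTemplate.vstemplate
</pre>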
<h4>1. Create a Project Template project</h4>
<p>Using the Visual Studio SDK, I added a Project Template project to my solution and modified the VSTemplate file with the appropriate details:</p>
<pre class="brush: xml;"><VSTemplate Version="2.0.0" Type="ProjectGroup"
xmlns="http://schemas.microsoft.com/developer/vstemplate/2005">
<TemplateData>
<Name>Xamarin.Forms with Caliburn.Micro</Name>
<Description>Xamarin.Forms project with PCL library.</Description>
<ProjectType>CSharp</ProjectType>
<Icon>_icon.ico</Icon>
<DefaultName>App</DefaultName>
<ProvideDefaultName>true</ProvideDefaultName>
<CreateNewFolder>true</CreateNewFolder>
<RequiredFrameworkVersion>2.0</RequiredFrameworkVersion>
<SortOrder>1000</SortOrder>
<TemplateID>Your ID HERE</TemplateID>
</TemplateData>
<TemplateContent/>
</VSTemplate>
</pre>
<h4>2. Export Projects and Add to the Project Template project</h4>
<p>Next, simply export all the projects in your solution that you want to include in your template. The <em>Project –> Export Template</em> dialog looks like this:</p>
<p><a href="https://lh3.googleusercontent.com/-e0EemBUsvNI/Wdf-uGTLb6I/AAAAAAAABkY/UBnk7YFOCzYJeLAj3XteVoKgKrFJYH2owCHMYCw/s1600-h/image%255B135%255D"><img width="504" height="385" title="image" style="display: inline; background-image: none;" alt="image" src="https://lh3.googleusercontent.com/-anWxV4AZeqI/Wdf-uYyJumI/AAAAAAAABkc/m1dESZJ3nAAqqiMMM-311te7TOCwdZGlwCHMYCw/image_thumb%255B133%255D?imgmax=800" border="0"></a></p>
<p>Once you’ve exported the projects as templates, take each of the zip files and extract them into a subfolder of your Template Project. Then, in Visual Studio, include these extracted subfolders as part of the project. Note that Visual Studio will assign a default <em>Action</em> for each file, so code files will be set to <em>Compile</em>, images will be set as <em>EmbeddedResource</em>, etc. You’ll have to go through each of these files and change the default action to <em>Content, copy if newer</em>. It’s a pain, and I found it easier to unload the project and manually edit the csproj file directly.</p>
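<p>If you do edit the csproj by hand, the goal is to turn each embedded template file into a content item along these lines (the file names here are illustrative):</p>
<pre class="brush: xml;"><ItemGroup>
  <Content Include="XF\MyTemplate.vstemplate">
    <CopyToOutputDirectory>PreserveNewest</CopyToOutputDirectory>
  </Content>
  <Content Include="XF\App.xaml.cs">
    <CopyToOutputDirectory>PreserveNewest</CopyToOutputDirectory>
  </Content>
</ItemGroup>
</pre>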
<h4>3. Configure the Template to include the embedded Projects</h4>
<p>Now that we have the embedded projects included in the output, we need to modify the template to point to these embedded templates. Visual Studio has a set of reserved keywords that can be used in the vstemplate and code transforms; <em>$safeprojectname$</em> is a <a href="https://msdn.microsoft.com/en-us/library/eehb4faa.aspx" target="_blank">reserved keyword</a> that represents the name of the current project. My vstemplate names the referenced templates after the name that was provided by the user: </p>
<pre class="brush: xml;"><VSTemplate Version="2.0.0" Type="ProjectGroup"
xmlns="http://schemas.microsoft.com/developer/vstemplate/2005">
<TemplateData>
...
</TemplateData>
<TemplateContent>
<ProjectCollection>
<ProjectTemplateLink ProjectName="$safeprojectname$" CopyParameters="true">XF\MyTemplate.vstemplate</ProjectTemplateLink>
<ProjectTemplateLink ProjectName="$safeprojectname$.Android" CopyParameters="true">XF.Android\MyTemplate.vstemplate</ProjectTemplateLink>
<ProjectTemplateLink ProjectName="$safeprojectname$.UWP" CopyParameters="true">XF.UWP\MyTemplate.vstemplate</ProjectTemplateLink>
<ProjectTemplateLink ProjectName="$safeprojectname$.iOS" CopyParameters="true">XF.iOS\MyTemplate.vstemplate</ProjectTemplateLink>
</ProjectCollection>
</TemplateContent>
</VSTemplate>
</pre>
<p>If the <em>ProjectName</em> is omitted, it will use the name within the embedded template.</p><h4>4. Fix Project References</h4><p>To ensure the project compiles, we must fix the project references to the PCL library in the iOS, Android and UWP projects. Here we leverage an interesting feature of Multi-Project templates – Visual Studio provides special reserved keywords for accessing properties of the root template project. In this case, we can reference the <em>safeprojectname</em> of the root project using the <em>$ext_safeprojectname$</em> reserved keyword. And because project references use a GUID to refer to the referenced project, we can provide the PCL project with a GUID that will be known to all the child projects – in this case, we can use <em>$ext_guid1$</em>.</p><p>The <em><ProjectGuid></em> element in the PCL Project must be configured to use the shared GUID:</p>
<pre class="brush: xml;"><PropertyGroup>
<MinimumVisualStudioVersion>11.0</MinimumVisualStudioVersion>
<Configuration Condition=" '$(Configuration)' == '' ">Debug</Configuration>
<Platform Condition=" '$(Platform)' == '' ">AnyCPU</Platform>
<ProjectGuid>{$ext_guid1$}</ProjectGuid>
<OutputType>Library</OutputType>
<AppDesignerFolder>Properties</AppDesignerFolder>
<RootNamespace>$safeprojectname$</RootNamespace>
<AssemblyName>$safeprojectname$</AssemblyName>
<FileAlignment>512</FileAlignment>
<TargetFrameworkVersion>v4.5</TargetFrameworkVersion>
<TargetFrameworkProfile>Profile259</TargetFrameworkProfile>
<ProjectTypeGuids>{786C830F-07A1-408B-BD7F-6EE04809D6DB};{FAE04EC0-301F-11D3-BF4B-00C04F79EFBC}</ProjectTypeGuids>
<NuGetPackageImportStamp>
</NuGetPackageImportStamp>
</PropertyGroup>
</pre>
<p>In the projects that reference the PCL, the path to the project, the project GUID and the name must also be modified:</p>
<pre class="brush: xml;"><ItemGroup>
<ProjectReference Include="..\$ext_safeprojectname$\$ext_safeprojectname$.csproj">
<Project>{$ext_guid1$}</Project>
<Name>$ext_projectname$</Name>
</ProjectReference>
</ItemGroup>
</pre>
<h4>5. Fix-ups</h4>
<p>Lastly, there will be some other fix-ups you will need to apply. These are things like original project names that appear in manifest files, etc. The templating engine can make changes to any type of file, but you may need to verify that these files have the <em>ReplaceParameters</em> attribute set to <em>True</em> in the .vstemplate file.</p><h3>Build and Deploy!</h3><p>With this in place, you can simply compile the Project Template and copy the zip to your <em>ProjectTemplates</em> folder. Optionally, you can add a VSIX project to the solution that you can use to bundle your Project Template as an installer that you can distribute to users via the Visual Studio Extensions Gallery. </p><p>Happy coding!</p>bryanhttp://www.blogger.com/profile/01332614158223702009noreply@blogger.com0tag:blogger.com,1999:blog-8942599.post-24671406510573491062017-09-28T09:54:00.000-04:002017-09-28T09:54:06.900-04:00Unit testing Xamarin.Forms Behaviors<p>So in <a href="http://www.bryancook.net/2017/09/unit-testing-xamarinforms.html" target="_blank">my last post, I outlined how I’m unit testing my Xamarin.Forms projects</a>. Today, I want to highlight a practical example of unit testing a Behavior.</p> <p>Let’s take a simple behavior that prevents item selection in a ListView:</p> <pre class="brush: csharp; gutter: false;">public class DisableListViewSelection : Behavior<ListView>
{
private ListView _attached;
protected override void OnAttachedTo(ListView bindable)
{
base.OnAttachedTo(bindable);
_attached = bindable;
if (_attached != null)
{
_attached.ItemSelected += Bindable_ItemSelected;
}
}
protected override void OnDetachingFrom(ListView bindable)
{
base.OnDetachingFrom(bindable);
if (_attached != null)
{
_attached.ItemSelected -= Bindable_ItemSelected;
}
}
private void Bindable_ItemSelected(object sender, SelectedItemChangedEventArgs e)
{
_attached.SelectedItem = null;
}
}
</pre>
<p>Unit testing the behavior should be straightforward, but there are a few gotchas.</p>
<p>The first concern is that we're testing a visual that requires Xamarin.Forms to be initialized using <em>Xamarin.Forms.Forms.Init();</em>. This is easily addressed using the <em><a href="https://www.nuget.org/packages/Xamarin.Forms.Mocks/" target="_blank">Xamarin.Forms.Mocks nuget package</a></em> <a href="http://www.bryancook.net/2017/09/unit-testing-xamarinforms.html" target="_blank">I mentioned in my last post</a>.</p>
<p>The second concern is that the Behavior<T> implementation <a href="https://msdn.microsoft.com/en-us/library/aa288461(v=vs.71).aspx" target="_blank">explicitly implements</a> the <a href="https://github.com/xamarin/Xamarin.Forms/blob/master/Xamarin.Forms.Core/Interactivity/IAttachedObject.cs" target="_blank">IAttachedObject interface, which is marked as internal</a>. We can address this with some reflection hackery.</p>
<p>I’ve addressed both concerns with the following base test fixture:</p>
<pre class="brush: csharp;">public abstract class BaseBehaviorTests<TSubjectBehavior, TTargetElement> : INotifyPropertyChanged
where TSubjectBehavior : Behavior<TTargetElement>, new()
where TTargetElement : BindableObject, new()
{
private BindingFlags _bindingFlags = BindingFlags.NonPublic | BindingFlags.Instance;
public TTargetElement ContainingElement { get; set; }
public TSubjectBehavior Subject { get; set; }
public event PropertyChangedEventHandler PropertyChanged;
public virtual void Setup()
{
Xamarin.Forms.Mocks.MockForms.Init();
Subject = new TSubjectBehavior();
ContainingElement = new TTargetElement();
}
protected virtual void Attach()
{
Subject.GetType().GetMethod("Xamarin.Forms.IAttachedObject.AttachTo", _bindingFlags).Invoke(Subject, new object[] { ContainingElement });
}
protected virtual void Detach()
{
Subject.GetType().GetMethod("Xamarin.Forms.IAttachedObject.DetachFrom", _bindingFlags).Invoke(Subject, new object[] { ContainingElement });
}
protected void NotifyPropertyChanged(string propertyName)
{
PropertyChanged?.Invoke(this, new PropertyChangedEventArgs(propertyName));
}
}
</pre>
<p>Take a quick peek at the Attach/Detach methods. The <em>IAttachedObject.AttachTo</em> method is explicitly implemented on the class, so we have to use the namespace-qualified name of the method to resolve it. If it were implicitly implemented, we could simply use "AttachTo".</p>
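<p>To see why the qualified name matters, here’s a small stand-alone sketch (the interface and namespace are hypothetical). An explicit interface implementation compiles down to a private method whose name is prefixed with the interface’s full name:</p>
<pre class="brush: csharp;">namespace MyApp
{
    public interface IAttachable { void AttachTo(object target); }

    public class ExplicitAttachable : IAttachable
    {
        // Compiled as a private method named "MyApp.IAttachable.AttachTo"
        void IAttachable.AttachTo(object target) { }
    }
}

// Resolving the explicit implementation requires the qualified name:
var flags = BindingFlags.NonPublic | BindingFlags.Instance;
var method = typeof(MyApp.ExplicitAttachable)
    .GetMethod("MyApp.IAttachable.AttachTo", flags);

// Had the class implemented the interface implicitly (a public AttachTo
// method), typeof(MyApp.ExplicitAttachable).GetMethod("AttachTo") would do.
</pre>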
<p>Now that we have this test fixture capability, writing a test for our <em>DisableListViewSelectionBehavior</em> is dead simple:</p>
<pre class="brush: csharp;">[TestClass]
public class DisableListViewSelectionBehaviorTests : BaseBehaviorTests<DisableListViewSelection, ListView>
{
List<object> _list = new List<object>();
[TestInitialize]
public override void Setup()
{
_list.Add(new object());
base.Setup();
}
[TestMethod]
public void WhenSelectingItem_AndAttachedToBehavior_ShouldUnselectedItem()
{
// arrange
Attach();
ContainingElement.ItemsSource = _list;
// act
ContainingElement.SelectedItem = _list.First();
// assert
ContainingElement.SelectedItem.ShouldBeNull();
}
[TestMethod]
public void WhenSelectingItem_AndDetachedFromBehavior_ShouldKeepSelectedItem()
{
// arrange
Detach();
ContainingElement.ItemsSource = _list;
// act
ContainingElement.SelectedItem = _list.First();
// assert
ContainingElement.SelectedItem.ShouldNotBeNull();
}
}
</pre>
<p>Well, that's all for now. My next post will look at behaviors with data binding.</p>
<p>Happy coding!</p>bryanhttp://www.blogger.com/profile/01332614158223702009noreply@blogger.com0tag:blogger.com,1999:blog-8942599.post-11169726007568231542017-09-27T14:28:00.001-04:002017-09-27T14:28:09.749-04:00Unit testing Xamarin.Forms<p>As a TDD Evangelist, I’m well aware of the dichotomy between desired and actual testing practices. It can be hard to write tests when we’re rapid prototyping, and we can convince ourselves that we’re writing testable code, but at some point you need to establish some testing practices before things scale beyond your reach. This week I’ve been looking at some code where I supported the engineering team but unit testing wasn’t our top priority. I’m glad that I helped shape the code with testability in mind, but now that I’m writing unit tests for the code I’m discovering fallacies in our thinking and areas that are proving difficult to test.</p> <p>Still, I’ve managed to go from 0% to 50% code coverage in about 10 days, which is promising. I’m also hopeful that there’ll be lots more code in testable areas, so this number should only trend upward. These last few days I’ve been looking at squeezing out a few extra tests for some of the custom behaviors, and I quickly discovered that testing code for visual elements is challenging. Here’s a breakdown of how I’m approaching testing for Xamarin.Forms.</p> <h3>Project Setup</h3> <p>First off, since 90% of my logic resides within the shared PCL layer, my focus is on writing unit tests for ViewModels, Services and Behaviors. Some services are platform-specific, and for those I will likely need to test using Xamarin.iOS or Xamarin.Android, but for the PCL layer I’m using MSTest and .NET 4.6. The choice of a .NET library instead of Xamarin.iOS or Xamarin.Android is largely for convenience and speed of running the tests, as they don’t require an emulator or device, and I’m also targeting UWP so I have to compile on a Windows machine regardless. 
I also want to leverage a mocking framework like Moq which won’t work correctly on Mono.</p> <h3>Mocking</h3> <p>For unit testing my ViewModels, I wrote a simple extension to Caliburn.Micro’s dependency container that can automatically fill my viewmodels with fake dependencies.</p> <pre class="brush: csharp;collapse: true;">public class TestContainer : SimpleContainer
{
public T CreateSubject<T>()
{
Type targetType = typeof(T);
var greedyConstructor = targetType
.GetConstructors()
.OrderByDescending(i => i.GetParameters().Length)
.FirstOrDefault();
foreach(var arg in greedyConstructor.GetParameters())
{
// handle IEnumerable<T> in constructor
if (typeof(IEnumerable).IsAssignableFrom(arg.ParameterType))
{
var genericType = arg.ParameterType.GenericTypeArguments[0];
// ensure we have at least one item in the array
if (!HasHandler(genericType, null))
{
CreateAndInsertMock(genericType);
}
}
else
{
if (!HasHandler(arg.ParameterType, null))
{
CreateAndInsertMock(arg.ParameterType);
}
}
}
this.PerRequest<T>();
return this.GetInstance<T>();
}
public Mock<T> GetMock<T>() where T : class
{
var obj = this.GetInstance<T>();
if (obj == null)
{
throw new InvalidOperationException("Mock is not directly used by the subject.");
}
return Mock.Get(obj);
}
public Mock<T> GetMock<T>(int index) where T : class
{
var instances = this.GetAllInstances<T>();
return Mock.Get(instances.ToArray()[index]);
}
public Mock<T> AddMock<T>(params Type[] interfaces) where T : class
{
var mock = new Mock<T>();
mock.SetupAllProperties();
if (interfaces.Length > 0)
{
var asMethodInfo = mock.GetType().GetMethod("As");
foreach(var def in interfaces)
{
var method = asMethodInfo.MakeGenericMethod(def);
method.Invoke(mock, null);
}
}
var instance = mock.Object;
RegisterInstance(typeof(T), null, instance);
foreach (var def in interfaces)
{
RegisterHandler(def, null, container => instance);
}
return mock;
}
private Mock CreateAndInsertMock(Type targetType)
{
var method = this.GetType().GetMethod("AddMock").MakeGenericMethod(targetType);
return (Mock)method.Invoke(this, new object[] { new Type[] { } } );
}
}
</pre>
<p>This coupled with a base test fixture really helped to get my viewmodels under the test microscope quickly. </p>
<pre class="brush: csharp;">public abstract class BaseViewModelTest<T> where T : BaseScreen
{
private TestContainer _container;
protected T Subject { get; set; }
public virtual void Setup()
{
_container = new TestContainer();
Subject = _container.CreateSubject<T>();
}
protected Mock<TDependency> Get<TDependency>() where TDependency : class
{
return _container.GetMock<TDependency>();
}
protected Mock<TDependency> Set<TDependency>() where TDependency : class
{
return _container.AddMock<TDependency>();
}
protected void Activate()
{
var activatable = Subject as Caliburn.Micro.IActivate;
if (activatable != null)
{
activatable.Activate();
}
}
}
[TestClass]
public class HomeScreenTests : BaseViewModelTest<HomeScreenViewModel>
{
[TestInitialize]
public override void Setup()
{
base.Setup();
// additional setup
}
// Tests...
}
</pre>
<h3>Testing UI Elements</h3>
<p>As I started writing unit tests for controls and behaviors, I realized that any Xamarin.Forms UI element was going to require the <em>Xamarin.Forms.Forms.Init()</em> method to be invoked, which would not work for my test project. Further investigation revealed that <a href="https://github.com/xamarin/Xamarin.Forms/blob/b9b9d2536ff0cd6ae4526356d663706c721e481f/Xamarin.Forms.Core/IPlatformServices.cs" target="_blank">most of the plumbing within Xamarin is marked as internal</a> or decorated with the <a href="https://github.com/xamarin/Xamarin.Forms/blob/master/Xamarin.Forms.Core/DeviceInfo.cs" target="_blank">[EditorBrowsable(EditorBrowsableState.Never)]</a> attribute, which makes it impossible for us to initialize with mocks <em>externally</em>. The only way to get at these internals is through the <em>InternalsVisibleTo</em> attribute.</p>
<p>Fortunately, <a href="http://jonathanpeppers.com/" target="_blank">Xamarin MVP Jon Peppers</a> <a href="http://jonathanpeppers.com/Blog/mocking-xamarin-forms" target="_blank">has discovered the same issue</a> and realized a small security flaw in Xamarin’s usage of <em>[InternalsVisibleTo]</em>. Since their usage doesn’t require the use of a public key, anyone can access these internals <a href="https://github.com/xamarin/Xamarin.Forms/blob/master/Xamarin.Forms.Core/Properties/AssemblyInfo.cs" target="_blank">if they name their assembly a certain way</a>. Jon has published a <a href="https://www.nuget.org/packages/Xamarin.Forms.Mocks/" target="_blank">nuget package that contains Xamarin.Forms.Core.UnitTests.dll assembly</a>. His dummy versions act as a great stand-in for bypassing the platform dependencies allowing us to write tests for simple functionality instead of platform behavior.</p>
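<p>As a minimal sketch of what this unlocks (using the same MSTest setup as above), once <em>MockForms.Init()</em> runs, Xamarin.Forms elements can be constructed directly in a plain .NET test assembly:</p>
<pre class="brush: csharp;">[TestClass]
public class MockFormsSmokeTests
{
    [TestInitialize]
    public void Setup()
    {
        // Stands in for Xamarin.Forms.Forms.Init(); no device or emulator needed
        Xamarin.Forms.Mocks.MockForms.Init();
    }

    [TestMethod]
    public void CanConstructVisualElements()
    {
        var label = new Label { Text = "hello" };
        Assert.AreEqual("hello", label.Text);
    }
}
</pre>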
<p>My next few posts will cover a few practical examples.</p>bryanhttp://www.blogger.com/profile/01332614158223702009noreply@blogger.com0tag:blogger.com,1999:blog-8942599.post-48737986986989450182017-09-18T09:01:00.000-04:002017-09-22T10:15:43.866-04:00Extension methods for Caliburn.Micro SimpleContainer<p>Caliburn.Micro ships with an aptly named basic inversion of control container called <em>SimpleContainer</em>. The container satisfies most scenarios, but I’ve discovered a few minor concerns when registering classes that support more than one interface.</p> <p>Suppose I have a class that implements two interfaces: <em>IApplicationService</em> and <em>IMetricsProvider</em>:</p> <pre class="brush: csharp; toolbar: false;">public class MetricsService : IApplicationService, IMetricsProvider
{
    #region IApplicationService
    public void Initialize()
    {
        // initialize metrics...
    }
    #endregion

    #region IMetricsProvider
    public void IncrementMetric(string metricName)
    {
        // do something with metrics...
    }
    #endregion
}
</pre>
<p>The <em>IApplicationService</em> is a pattern I usually implement where I want to configure a bunch of background services during application startup, and the <em>IMetricsProvider</em> is a class that will be consumed elsewhere in the system. It's not a perfect example, but it'll do for our conversation...</p>
<p>The <em>SimpleContainer</em> implementation doesn't have a good way of registering this class under both interfaces without creating two separate instances. I <u>really</u> want the same instance to be used for both of these interfaces. Typically, to work around this issue, I might do something like this:</p>
<pre class="brush: csharp; toolbar: false;">var container = new SimpleContainer();
container.Singleton<IMetricsProvider,MetricsService>();
var metrics = container.GetInstance<IMetricsProvider>();
container.Instance<IApplicationService>(metrics);
</pre>
<p>This works in trivial examples, but it isn't ideal. Unfortunately, this approach can fail if the class has additional constructor dependencies. In that scenario, the order in which I register and resolve dependencies becomes critical: if you resolve in the wrong order, the container injects null instances.</p>
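<p>To see why the order matters, consider this sketch (<em>ILogger</em> and <em>ConsoleLogger</em> are hypothetical types I'm using purely for illustration):</p>

```csharp
// Hypothetical: MetricsService now needs a constructor dependency.
public class MetricsService : IApplicationService, IMetricsProvider
{
    public MetricsService(ILogger logger) { /* ... */ }
    // ...
}

var container = new SimpleContainer();
container.Singleton<IMetricsProvider, MetricsService>();

// Resolving here -- before ILogger is registered -- builds the singleton
// with a null ILogger, and that broken instance is the one that gets reused.
var metrics = container.GetInstance<IMetricsProvider>();

container.Singleton<ILogger, ConsoleLogger>(); // registered too late
container.Instance<IApplicationService>(metrics);
```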
<p>To work around this issue, here's a simple extension method:</p>
<pre class="brush: csharp;">public static class SimpleContainerExtensions
{
    public static SimpleContainerRegistration RegisterSingleton<TImplementation>(this SimpleContainer container, string key = null)
    {
        container.Singleton<TImplementation>(key);
        return new SimpleContainerRegistration(container, typeof(TImplementation), key);
    }

    public class SimpleContainerRegistration
    {
        private readonly SimpleContainer _container;
        private readonly Type _implementationType;
        private readonly string _key;

        public SimpleContainerRegistration(SimpleContainer container, Type type, string key)
        {
            _container = container;
            _implementationType = type;
            _key = key;
        }

        public SimpleContainerRegistration AlsoAs<TInterface>()
        {
            _container.RegisterHandler(typeof(TInterface), _key, c => c.GetInstance(_implementationType, _key));
            return this;
        }
    }
}</pre>
<p>This registers the class as a singleton and allows me to chain additional handlers for each required interface. Like so:</p>
<pre class="brush: csharp;">var container = new SimpleContainer();
container.RegisterSingleton<MetricsService>()
    .AlsoAs<IApplicationService>()
    .AlsoAs<IMetricsProvider>();
</pre>
<p>Happy coding!</p>bryanhttp://www.blogger.com/profile/01332614158223702009noreply@blogger.com0tag:blogger.com,1999:blog-8942599.post-35805789131850208052017-09-05T10:54:00.001-04:002017-09-05T10:58:32.257-04:00Dynamically hiding Cells in a TableView<p>Suppose you have a fixed list of items that you want to display in a Xamarin.Forms <em><a href="https://developer.xamarin.com/guides/xamarin-forms/user-interface/tableview/" target="_blank">TableView</a></em>, but you want some rows and sections of that table to be hidden. Unfortunately, there isn’t a convenient <em>IsVisible</em> property on the <em>TableSection</em> or <em>Cell</em> elements that we can bind to, and the only way to manipulate them is through code. Here’s a quick look on how to collapse our Cells and Sections using an <em>Attached Property</em>.</p> <p>Take this example layout:</p> <pre class="brush: xml; gutter: false;"><ContentPage>
    <Grid>
        <TableView Intent="Settings">
            <TableRoot>
                <TableSection Title="Group 1">
                    <SwitchCell Title="Setting 1" />
                    <SwitchCell Title="Setting 2" />
                </TableSection>
                <TableSection Title="Group 2">
                    <SwitchCell Title="Setting 3" />
                    <SwitchCell Title="Setting 4" />
                </TableSection>
            </TableRoot>
        </TableView>
    </Grid>
</ContentPage></pre>
<p>In the above, I have two <em>TableSection</em> elements that contain two very simple <em>SwitchCell</em> elements. A lot of the detail has been omitted for clarity, but let's assume that I want the ability to only show certain settings to the user. Perhaps each setting is controlled by a special backend entitlement logic, and I want to bind that visibility for each cell through my ViewModel.</p>
<p>We'll create an attached property that can hide our cells:</p>
<pre class="brush: csharp; gutter: false;">public class CellEx
{
    public static readonly BindableProperty CollapsedProperty =
        BindableProperty.CreateAttached(
            "Collapsed",
            typeof(bool?),
            typeof(CellEx),
            default(bool?),
            defaultBindingMode: BindingMode.OneWay,
            propertyChanged: OnCollapsedChanged);

    public static bool? GetCollapsed(BindableObject target)
    {
        return (bool?)target.GetValue(CollapsedProperty);
    }

    public static void SetCollapsed(BindableObject target, bool? value)
    {
        target.SetValue(CollapsedProperty, value);
    }

    private static void OnCollapsedChanged(BindableObject sender, object oldValue, object newValue)
    {
        // do work with cell
    }
}</pre>
<p>The <em>OnCollapsedChanged</em> event handler is called when the bound value of the BindableProperty is first set, and we’ll use it to obtain a reference to the <em>Cell</em>. As the binding is potentially invoked before the UI is fully initialized, we’ll need to defer our changes until it’s ready:</p>
<pre class="brush: csharp; gutter: false;">private static void OnCollapsedChanged(BindableObject sender, object oldValue, object newValue)
{
    var view = sender as Cell;
    bool isVisible = (bool)newValue;

    if (view != null)
    {
        // the parent isn't available until the page has loaded.
        if (view.Parent == null)
        {
            view.Appearing += (o, e) =>
            {
                ToggleViewCellCollapsedState(view, isVisible);
            };
        }
        else
        {
            ToggleViewCellCollapsedState(view, isVisible);
        }
    }
}</pre>
<p>Once we have an initialized <em>Cell</em>, we need to obtain a reference to the containing <em>TableSection</em>. As a twist, the <em>Parent</em> of the <em>Cell</em> is the root <em>TableView</em>, so we must traverse the entire table downward to find the correct <em>TableSection</em>. Since a <em>TableView</em> only contains a fixed list of cells, scanning the entire table shouldn't be too troublesome at all:</p>
<pre class="brush: csharp; gutter: false;">private static void ToggleViewCellCollapsedState(Cell cell, bool isVisible)
{
    var table = (TableView)cell.Parent;
    TableSection container = FindContainingTableSection(table, cell);

    if (container != null)
    {
        if (!isVisible)
        {
            // do work to hide cell
        }
    }
}

private static TableSection FindContainingTableSection(TableView table, Cell cell)
{
    foreach (var section in table.Root)
    {
        foreach (var child in section)
        {
            if (child == cell)
            {
                return section;
            }
        }
    }

    return null;
}</pre>
<p>Lastly, once we've obtained the necessary references, we can simply manipulate the <em>TableSection</em> contents. To ensure this works on all platforms, this code must execute on the UI thread:</p>
<pre class="brush: csharp; gutter: false;">if (!isVisible)
{
    Device.BeginInvokeOnMainThread(() =>
    {
        // remove the cell from the section
        container.Remove(cell);

        // remove the section from the table if it's empty
        if (container.Count == 0)
        {
            table.Root.Remove(container);
        }
    });
}</pre>
<p>We can then bind the visibility of our cells to the attached property:</p>
<pre class="brush: xml; gutter: false; toolbar: false;"><ContentPage
    xmlns:ex="clr-namespace:MyNamespace.Behaviors">
    <Grid>
        <TableView>
            <TableRoot>
                <TableSection Title="Group 1">
                    <SwitchCell Title="Setting 1" ex:CellEx.Collapsed="{Binding IsSetting1Visible}" />
                    <SwitchCell Title="Setting 2" ex:CellEx.Collapsed="{Binding IsSetting2Visible}" />
                </TableSection>
                <TableSection Title="Group 2">
                    <SwitchCell Title="Setting 3" ex:CellEx.Collapsed="{Binding IsSetting3Visible}" />
                    <SwitchCell Title="Setting 4" ex:CellEx.Collapsed="{Binding IsSetting4Visible}" />
                </TableSection>
            </TableRoot>
        </TableView>
    </Grid>
</ContentPage></pre>
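<p>For completeness, the backing ViewModel might look something like this (a minimal sketch of my own; only the property names are taken from the bindings above):</p>

```csharp
// In a real app these values would come from the backend entitlement
// logic, and the class would implement INotifyPropertyChanged if the
// values could change after the initial binding.
public class SettingsViewModel
{
    public bool IsSetting1Visible { get; set; } = true;
    public bool IsSetting2Visible { get; set; } = true;
    public bool IsSetting3Visible { get; set; } = false;
    public bool IsSetting4Visible { get; set; } = false;
}
```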
<p>This works great and can dynamically hide individual cells or an entire section if needed. The largest caveat to this approach is that it <em>only hides</em> cells and won’t re-introduce them into the view when the binding changes. Re-introducing them is entirely feasible: you could cache the cells in a local variable and re-insert them programmatically, but you’d need to remember the containing section and the appropriate indexes. I’ll leave that to you, dear reader, or I may rise to the challenge if I determine I really want to re-activate these cells in my app.</p>
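<p>If you do want to bring cells back, here’s one hedged sketch of the bookkeeping involved (untested and entirely my own naming; it doesn’t account for other removed cells shifting the remembered indexes, nor does it restore a removed section to its original position in the table):</p>

```csharp
// Remember where each hidden cell came from so it can be restored later.
static readonly Dictionary<Cell, (TableSection Section, int Index)> _hidden =
    new Dictionary<Cell, (TableSection, int)>();

static void HideCell(TableView table, TableSection section, Cell cell)
{
    _hidden[cell] = (section, section.IndexOf(cell));
    section.Remove(cell);

    // remove the section from the table if it's now empty
    if (section.Count == 0)
    {
        table.Root.Remove(section);
    }
}

static void ShowCell(TableView table, Cell cell)
{
    if (_hidden.TryGetValue(cell, out var home))
    {
        // re-attach the section first if it was emptied and removed;
        // note this appends it to the end of the table
        if (!table.Root.Contains(home.Section))
        {
            table.Root.Add(home.Section);
        }

        var index = Math.Min(home.Index, home.Section.Count);
        home.Section.Insert(index, cell);
        _hidden.Remove(cell);
    }
}
```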
<p>Happy coding!</p>bryanhttp://www.blogger.com/profile/01332614158223702009noreply@blogger.com0