I've been using Copilot to help with general mundane tasks for a while now, both with the VS Code extensions and with GitHub Copilot itself, where using it to provide summaries for Pull Requests, general code reviews and so on can give really useful, quick feedback without any effort.
But, what about those of us using Azure DevOps Repos? Microsoft have been concentrating on GitHub for some time, but migrating a mass of code and pipelines and processes from ADO to GitHub is not a trivial task. So I decided to have a play around and see what can be done with a basic pipeline, simple Git commands and Azure AI Foundry.
Hopefully what follows will be of some use to others.
The High Level
There are probably lots of ways this could be done, but my thought was to have a pipeline template that packages up the changes via a 'git diff' and sends the results to an AI Agent in Foundry using the API.
The pipeline is simple: it triggers when a Pull Request is created, runs a 'git diff' on the repo to capture the changes, and then sends those changes to an Azure AI Foundry Agent using the REST API.
The agent is instructed to produce a summary of the changes using its set of instructions. The exact instructions here can be as simple or as broad as you like. The instructions I use can be viewed in the code that I'll post a link to at the end of this article.

Hopefully the diagram helps. As you can see, when we raise a PR, this triggers a pipeline that runs a 'git diff' on the repo and sends the result to an AI Agent in Azure AI Foundry. The agent I'll be using is configured with a very specific task; it is deliberately not a broad 'jack of all trades'. It reads the git diff, looks at the changes, and provides a summary based on things like good coding standards, security, and general information. I don't want this agent doing anything other than that. The intention is that when we have other use cases, we will create a new dedicated agent for each specific task.
The Code
So where is the code to deploy this? Well, in this GitHub repo HERE. I'll run through a simple deployment in the following paragraphs. It's pretty basic, so there should be no major problems for anyone who has knowledge of Azure DevOps YAML pipelines, a tiny bit of Python, and a little knowledge of LLMs.
Requirements
To deploy this, you will need:
- An Azure Foundry Project
- Python 3.11+
- Git command line
- Azure DevOps Project for running the pipeline
- Azure CLI (az)
The first step then is to clone the repo. The Python scripts in the repo are used to create the agent in Azure AI Foundry, with some examples of how to interact with the agent using Python too.
AI Instructions
If you review the devops-agent.py file, it should be pretty obvious how you can set the instructions for your agent. The script has a variable named 'instructions'; this is where you tell your agent what you want it to do. It could be anything, but have a think about what you want this particular agent to do. I've been pretty broad, but within the basic remit of "write a summary for a PR".
```python
instructions = """
You are a helpful, senior devops specialist, you have expertise in code security, best practices, clean code and documentation.
When reviewing code, you focus on library versions being used, to make sure they are not out of date.
When reviewing Terraform, check that the provider major versions are the latest available.
You will then write a summary of the code submitted, highlighting key areas to focus on such as security, out of date libraries and providers, and any other improvements deemed necessary.
"""
```

Exactly what you put here is up to you, but some common components of an instruction set include:
- Persona/Role — Could be as simple as "You are a senior Software engineer". Here I am using this one agent for multiple code reviews (but you may want to create one agent per language so you can provide very specific context).
- Context — What should the agent focus on? Are there specific areas of concern? Is it mainly code formatting standards, security, particular languages? Here, I want to make sure we aren't using out-of-date libraries and providers, but the agent should also highlight any glaring security concerns.
- Constraints — What the Agent should not do.
- Tools that can be used — I've not added anything in this area, but this could be to use any useful documentation linked to the agent in Foundry, company data…
- Specific tasks — Here the agent is only doing one specific task, but you could also define an ordered set of steps you want the agent to follow.
- Output format — You may want to instruct your agent to keep answers short, or that it should be in Markdown format, or a particular tone or style.
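To make those components concrete, here's a small, hypothetical Python sketch that assembles an instruction string from the pieces above. None of these helper names or section wordings come from the repo's scripts; they are purely illustrative.

```python
# Hypothetical helper: build an agent instruction string from the components
# listed above (persona, context, constraints, output format). The section
# wording is illustrative, not taken from devops-agent.py.
def build_instructions(persona: str, context: str,
                       constraints: str, output_format: str) -> str:
    sections = [
        persona,
        f"Focus on: {context}.",
        f"Do not: {constraints}.",
        f"Output format: {output_format}.",
    ]
    return "\n".join(sections)

instructions = build_instructions(
    persona="You are a helpful, senior devops specialist.",
    context="security, out-of-date libraries and Terraform provider versions",
    constraints="modify code or suggest deployments",
    output_format="a short Markdown summary",
)
print(instructions)
```

Keeping the components separate like this makes it easy to swap the persona or constraints per agent without rewriting the whole instruction block.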
Deploying the Agent
The first step will be to create an Azure AI Foundry project. I won't go into the steps here, as there are plenty of write-ups on the web; the Microsoft Quickstart to follow is HERE.
Once done, clone the repo HERE so that you have all the scripts, pipelines and some code to test with.
The GitHub repo has steps that describe how to deploy an agent using the Python scripts, but here's a summary:
- Save the Foundry Project endpoint URL in an environment variable named `PROJECT_ENDPOINT`.
- Specify the model you wish to use for your agent in an environment variable named `MODEL_DEPLOYMENT_NAME`. I'm using `gpt-4.1-mini`.
- Log in to your Azure tenant with the Azure CLI (`az login`).
- With the env vars set, run the `devops-agent.py` Python script; this should return the version and ID of the agent when it's been created.
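Before running the script, a quick pre-flight check can save a confusing failure later. This tiny helper is hypothetical (it's not part of devops-agent.py), but it shows the idea:

```python
# Hypothetical pre-flight check (not part of devops-agent.py): report any
# required environment variables that are missing or empty before we start.
import os

REQUIRED_VARS = ["PROJECT_ENDPOINT", "MODEL_DEPLOYMENT_NAME"]

def missing_env(required=REQUIRED_VARS, env=None) -> list:
    env = os.environ if env is None else env
    return [name for name in required if not env.get(name)]

if missing_env():
    print(f"Missing environment variables: {missing_env()}")
```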
`Agent created (id: devops:1, name: devops, version: 1)`

Submitting code for review
With your agent now created, it's a matter of sending it some code to review. There is a script in the repo that can do this using the Foundry API; look for the Python script `ask-devops.py`. This script reads content from a file named `changes.txt`; the intention is that this file contains the results of a git diff command (`git diff main $branch > ./changes.txt`).
When submitted to the agent, the agent will provide a summary of the code changes it has been sent. If the git diff is only a small change, then it's not going to have much to report on, so don't expect a super exciting, ground-breaking summary.
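The opposite problem is worth noting too: a diff covering a whole app can be large, and a very large diff may exceed the model's context window. A hypothetical pre-processing step (not in the repo's ask-devops.py) could cap what gets submitted:

```python
# Hypothetical guard (not part of ask-devops.py): cap very large diffs so
# the submission stays within the model's context window. The character
# budget is a rough guess; tune it for the model you deploy.
from pathlib import Path

MAX_CHARS = 120_000

def load_diff(path: str = "changes.txt", max_chars: int = MAX_CHARS) -> str:
    text = Path(path).read_text()
    if len(text) > max_chars:
        text = text[:max_chars] + "\n[diff truncated for length]"
    return text
```

A fancier version might truncate per-file rather than at a hard character limit, but for a first pass a simple cap avoids API errors on big changes.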
Within the repo you cloned, there is an example app that you can use to demonstrate the type of reports the agent will produce. This demo app can be found in the ./examples directory. Here's some steps you can follow to perform this locally without a pipeline.
- Create a new local Git repo by creating a directory on your system, changing into it and running `git init`. Add a file such as `readme.md` so that Git creates a main branch. Once the file is created, run `git add .` and `git commit -am "initial"` (feel free to use whatever commit message you like).
- Still in this directory, create a new branch where the code to be reviewed will be committed: `git branch feature/initial` and then `git switch feature/initial`. You are now operating within the branch.
- Add files to the feature/initial branch. In the examples directory from the cloned repo, there is a directory named `azure_linux_docker_app_service`. This app is intended for testing the agent only; it is not to be deployed and contains some deliberate flaws so we can see what the agent suggests. Copy the contents of this directory and save it to the local Git repo you created.
- Run `git add .` (so Git tracks the content), then run `git commit -am "initial"`. This will commit the files to the feature branch of your local Git repo.
With the feature branch now full of content, and your main branch still empty, you can run a git diff and send the results to your agent for review.
- In the same directory, run `git diff main feature/initial > changes.txt`. This returns the changes between your feature branch and the main branch, and saves the output to the `changes.txt` file. As the main branch is empty, there will be a lot of changes between the two branches.
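If you'd rather script that step, here's a minimal Python sketch that shells out to Git and writes `changes.txt`. The branch names mirror the walkthrough; the helper names are my own, not from the repo.

```python
# Minimal sketch: run `git diff <base> <feature>` and save the output to a
# file, mirroring the manual step above. Helper names are illustrative.
import subprocess
from pathlib import Path

def diff_cmd(base: str, feature: str) -> list:
    """The exact argv handed to subprocess, kept separate for easy testing."""
    return ["git", "diff", base, feature]

def write_diff(base: str, feature: str, out_file: str = "changes.txt") -> int:
    """Run the diff and write it out; returns the diff size in characters."""
    result = subprocess.run(diff_cmd(base, feature),
                            capture_output=True, text=True, check=True)
    Path(out_file).write_text(result.stdout)
    return len(result.stdout)
```

Calling `write_diff("main", "feature/initial")` from the repo root reproduces the `git diff main feature/initial > changes.txt` step.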
We now have everything we need to submit this to the agent for review; the `ask-devops.py` script can be used to read the `changes.txt` file and submit it to the devops agent using the Foundry API.
- There are some Python libraries required to run this script, and best practice is to install these in a Python virtual environment. Once you have done this, run the `ask-devops.py` script. Note: by default this script looks for the `changes.txt` file in the same directory that the script is run from. You will also need to add an environment variable for the Foundry endpoint (`PROJECT_ENDPOINT`).
- The script will read the content of `changes.txt`, submit it to the agent, and the agent will perform the instructions it has been given. As this git diff compares a full app against an empty main branch, the 'summary' will be relatively long. Here's the returned content from the agent.
Response output: ### Code Review Summary
The submitted code includes multiple files to provision a Node.js app hosted on an Azure Linux App Service using Terraform, Docker, and Azure monitoring/security services. Here's a detailed review emphasizing security, library versions, and improvements:
---
### 1. **Terraform Provider Versions**
- **Problem**: The provider version in `provider.tf` is pinned to **`azurerm = "= 2.37.0"`**, which is outdated.
- **Recommended Improvement**: Upgrade to the latest stable **`v4.x.x`** version of the AzureRM provider. The latest major versions add security and functional improvements, and often simplify management (e.g., improved support for managed identities and monitoring).
- Example:
```hcl
provider "azurerm" {
  version = "~> 4.0"
  features {}
}
```
- The `random` and `template` providers seem fine but verify if newer versions are available.
---
### 2. **Node.js Dockerfile and Application Libraries**
- **Node.js Base Image**:
- `FROM node:14-slim` is used.
- Node 14.x is in **maintenance mode** and nearing or past end-of-life (EOL).
- **Recommendation**: Upgrade to Node.js 16 LTS or 18 LTS slim base images for better security and performance.
- E.g. `FROM node:18-slim`
- **NPM Dependencies**:
- `applicationinsights` at version `^1.8.8` (package.json) is somewhat old. Latest stable is around `~1.8.10+`.
- Check for any vulnerabilities by running `npm audit` regularly.
- Regular updates ensure telemetry data is sent securely and bugs fixed.
- **Docker Build Best Practices**:
- Using `npm ci` is good for clean install from lock file.
- Consider multi-stage Docker builds if image size or build complexity grows.
---
### 3. **Security Best Practices**
- **Key Vault Access**:
- Access policies provide:
- The client principal with get/set/delete on secrets.
- App Service (and deployment slot) with only **get** permissions.
- This follows **least privilege principle**, which is good.
- Validate that the client principal (likely the Terraform runner) does not have excessive rights beyond what's needed.
- **Managed Identities**:
- System-assigned managed identities are correctly used for App Service and slots.
- The role assignment grants only **AcrPull** role on Container Registry – minimal needed for pulling container images.
- Ensure no elevated roles accidentally assigned to managed identities.
- **Secret Storage and Injection**:
- Application Insights instrumentation key stored securely in Key Vault and referenced in App Service settings via Key Vault references.
- Ensure any other sensitive data also follows this pattern.
- Review if database credentials (referenced in the Node.js app) are handled securely using environment variables or Key Vault references.
- **TLS Settings**:
- Minimum TLS version set to 1.2 on App Service – good security practice.
- **Log Retention**:
- HTTP logs retention configured (7 days, 100 MB).
- Monitor log rotation and storage costs as per your organizational needs.
- **Health Checks & Monitoring**:
- Health check path configured and monitored via Azure Application Insights Web Tests.
- Email alerts configured for availability, HTTP 5xx errors, and dependency failures.
- This proactive monitoring is a strong operational best practice.
---
### 4. **Terraform Code Quality**
- The Terraform code is well structured and modularized:
- Separate files for `app_service`, `access_policies`, `monitoring`, `secrets`, `shared`, and `variables`.
- Use of locals and meaningful variable defaults.
- Comments explaining settings help maintainability.
- `lifecycle.ignore_changes` is used to prevent redeployment when Docker image name or scm type changes — this is a good practice for deployment stability.
- Random suffix generation for globally unique resources like Key Vault and ACR is implemented correctly.
- Outputs expose key resource identifiers (App Service name, URL, Container Registry), aiding integration and debugging.
---
### 5. **Other Observations**
- **Scripts**:
- `build-and-push.sh` and `deploy.sh` are simple and effective for CI/CD.
- Consider adding error handling in these scripts (e.g., check exit codes).
- Passing sensitive information (like ACR credentials) is done via managed identity or Azure CLI login, which is good.
- **Node.js App Code** (`index.js`):
- Basic HTTP server with Application Insights telemetry integrated.
- Graceful error handling and process exit on missing config or db connect failures.
- You use environment variables for config—make sure secrets are injected securely at runtime.
- The DB connection snippet appears to use Azure AD MSI auth which is good for security but double-check if `sql` library is imported (not shown in the snippet).
---
### Summary & Recommendations
| Area | Current State | Recommendation |
|------------------------------|------------------------------------------------------------------|-------------------------------------------------|
| AzureRM Provider Version | Version 2.37.0 (outdated) | Upgrade to latest 4.x version |
| Node.js Base Image | Node 14-slim (EOL approaching or passed) | Use Node 16 or Node 18 LTS slim |
| NPM Dependencies | `applicationinsights@1.8.8` | Check for latest stable and update regularly |
| Key Vault Access | Appropriate least privilege access policies | Continuously validate minimal permissions |
| Managed Identities | Used for ACR access and key vault referencing | Confirm no excess privileged role assignments |
| TLS & Security Settings | TLS 1.2 minimum, logs retention configured | Maintain and enforce updated secure protocols |
| Terraform Code Quality | Clean, modular, comments, lifecycle management | Continue best practices |
| CI/CD Scripts | Simple scripts present, no error handling | Add error handling and security checks |
| Node.js App | Basic app with Insights, MSI AD authentication | Review error handling, code completeness, logging|
---

The eagle-eyed amongst you will note that the report doesn't pick up specific vulnerabilities, and some of the recommendations aren't perfect. This is where adding context, better knowledge to the agent, or even a more advanced model can improve the reports. Remember, the agent is not intended to be a substitute for running security scans/SAST against your code.
Bundling this into a pipeline template
So all great, but the aim is to get this summary written into the PR comments as part of a pipeline.
For this, we need:
- Azure DevOps Project
- A repo for pipeline templates
In your Azure DevOps project, create a repo to store the pipeline template. The template available in the Git repo under `pipeline-templates/git-diff.yml` should be saved in this repo.

Create a new repo for your application that you will be developing, use the same example from the local test above if you wish. In this new repo, add a new pipeline that will be triggered upon a new PR being raised, this will call the git-diff.yml template and send the git diff contents to the Foundry Agent.
The pipeline to trigger upon a PR is quite simple; all the functionality is in the template:
```yaml
# ============================================
# PR Analysis Template - ask-devops
# ============================================
trigger: none

pr:
  branches:
    include:
      - main

resources:
  repositories:
    - repository: pipelines
      type: git
      name: iac/pipelines
      ref: refs/heads/main

jobs:
  - template: jobs/git-diff.yml@pipelines
    parameters:
      foundryEndpoint: https://andrew-8114-resource.services.ai.azure.com/api/projects/andrew-8114
```

Create the pipeline
So, we now have all the code in the git repos we need, we just need to initiate the pipeline and raise a PR.
- In Azure Pipelines, click `New pipeline` and select the option for an existing YAML pipeline in the Azure Git repo.

And that's it. Now, when you raise a PR following a change to the code, the pipeline will automatically run a git diff and send it to the devops agent for review. Here's a brief example with a couple of daft changes to `min_tls_version` and `https_only` in the Terraform config. The idea here is to show that the agent will pick up this flawed change.
For example, from this:
```hcl
resource "azurerm_app_service" "current" {
  name                = local.app_service_name
  location            = data.azurerm_resource_group.current.location
  resource_group_name = data.azurerm_resource_group.current.name
  app_service_plan_id = azurerm_app_service_plan.current.id
  https_only          = true
```

to this:

```hcl
resource "azurerm_app_service" "current" {
  name                = local.app_service_name
  location            = data.azurerm_resource_group.current.location
  resource_group_name = data.azurerm_resource_group.current.name
  app_service_plan_id = azurerm_app_service_plan.current.id
  https_only          = false
```

Having made these daft changes and raising a PR, the agent creates a summary and saves it to the comments of the PR.

Next Steps
This is just a starting point; there are clear improvements that can be made here. The obvious one would be to give the agent access to better knowledge and make it aware of the context of the environment it is operating in, for example internal documentation that defines internal standards. The instructions could also be fine-tuned further to what you are interested in. Maybe you want just 3 or 4 sentences, and nothing more. This definitely is a starting point, not 'job done'.
With the basic framework and tooling set up to create these agents, and given how simple it is, it should be possible to expand their use to ease the load on devs and platform engineers, helping the wider teams concentrate on the strategic, value-add work rather than simply mopping up spills.
Let me know what you're using AI Agents for in your pipelines for more ideas.
Thanks