All of a sudden, our TFS server went down. When I check IIS, http://localhost:8080/tfs returns a 503 Service Unavailable error.
The application pools below keep stopping automatically, even though I start them manually in IIS Manager: Microsoft Team Foundation Server Application Pool and Microsoft Team Foundation Server Proxy Application Pool.
Please guide me in solving this issue.
This problem was caused by a network issue. The steps below solved it.
Thank you for your replies.
c#,asp.net,iis,websocket
The problem was with web.config. I added <httpRuntime targetFramework="4.5.1" /> to the system.web section and it finally began to work...
c#,asp.net,iis
There are several domain providers, such as GoDaddy, Name.com, etc., that you can use to buy a domain name. These providers also give you steps to map the domain name to your website. Check out this link for an example. This link explains domain name configuration in detail.
visual-studio-2013,tfs,disaster-recovery,tfvc
This is easiest solved by: creating a new workspace (make sure it is a local workspace) in a new location; getting the same version that is your base version using Get Specific Version; deleting its contents (while retaining the $tf folder); pasting your old solution with updates over the one you...
visual-studio-2013,tfs,tfvc
Not sure if you're even using the data binding features for which the .datasource file is generated, but turning that off in your service reference configuration by manually editing the .svcmap file would solve your problem. After editing, make sure you use the Update Reference feature to get rid of...
javascript,jquery,ajax,angularjs,iis
Without seeing your server-side code it is difficult to say if that is a problem. The JS you have presented generally looks okay. You say that it runs without errors but fails to produce the expected results on the server. So I suggest you add the server-side code. Both cases...
vb.net,powershell,tfs,tfsbuild,tfs2013
You can access the download zip via a properly constructed URL. For example: https://{AccountName}.visualstudio.com/DefaultCollection/{TeamProject}/_apis/build/builds/{BuildId}/artifacts/drop?%24format=zip ...
iis,tfs
This problem was caused by a network issue. The steps below solved it. I changed the server from the domain to a workgroup and restarted the machine. Then I changed it back from the workgroup to the domain and restarted the machine. Go to IIS Manager -> Application Pools -> right-click Microsoft Team Foundation Server Application Pool -> Advanced Settings. Under Process...
tfs,tfs2013
You can integrate with UserVoice to do this. There is a feature called service hooks that allows you to trigger the integration with them, and it allows you to do the same as http://visualstudio.uservoice.com. You could also extend TFS to do it yourself with a simple voting webpage and a...
asp.net,iis
First verify that the World Wide Web Publishing Service is installed and not disabled. [Source:MSDN] Right-click My Computer on the desktop, and then click Manage. Expand the Services and Applications node, and then click the Services node. In the right pane, locate the World Wide Web Publishing Service. If the...
c#,iis,.net-4.5,web-api
Using Task<T>.Result is the equivalent of Wait which will perform a synchronous block on the thread. Having async methods on the WebApi and then having all the callers synchronously blocking them effectively makes the WebApi method synchronous. Under load you will deadlock if the number of simultaneous Waits exceeds the...
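The contrast above can be sketched in a minimal Web API controller. This is illustrative only: GetDataAsync and the controller are hypothetical names, not from the original answer.

```csharp
using System.Threading.Tasks;
using System.Web.Http;

public class ValuesController : ApiController
{
    // Bad: .Result blocks the request thread while the awaited task tries to
    // resume on the same synchronization context -> deadlock risk under load.
    public string GetBlocking()
    {
        return GetDataAsync().Result;
    }

    // Good: async all the way up; the thread is released while awaiting.
    public async Task<string> GetNonBlocking()
    {
        return await GetDataAsync();
    }

    private async Task<string> GetDataAsync()
    {
        await Task.Delay(100); // stand-in for real I/O
        return "data";
    }
}
```

The rule of thumb is that once a call chain contains an async method, every caller up to the framework entry point should also be async.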
c#,asp.net,wcf,iis
It seems that it IS possible after all, and I now have it working. Basically, the only reason I wasn't able to share the port between IIS and my WCF service was that I was specifying the 'Host Name' in the IIS bindings. As soon as I removed the host...
.net,windows,iis,kerberos
Solved. Eset NOD32 Antivirus version 4 was modifying HTML authorization headers on some computers. After disabling Web access protection everything works like a charm.
visual-studio,tfs,build,msbuild
Without modifying the build process template? No. Even then, the build number is set prior to compilation (or, IIRC, even syncing the code from source control), so you're in for a wild ride trying to get the behavior you want.
asp.net,iis,hosting,web-deployment
Good afternoon Ramesh! If I understand your question correctly you currently have 3 separate web roots and you want to use these as separate web applications that will be served to users based on geography in some way. You also want to maintain individual web configuration files for each as...
iis,iis-7,iis-7.5,appcmd
Good afternoon mark! To display the physical path using only the app object you can reference it using the [path='string'] syntax. Using this you can reference all of the properties of the nested VirtualDirectory object. So for your example you would use the following command: appcmd list app 'MyDayforce/' /text:[path='/'].physicalPath...
c#,asp.net,iis,visual-studio-2013,iis-express
Let's look at the code that throws: private void ValidateRequestEntityLength() { if (!this._disableMaxRequestLength && (this.Length > this._maxRequestLength)) { if (!(this._context.WorkerRequest is IIS7WorkerRequest)) { this._context.Response.CloseConnectionAfterError(); } throw new HttpException(SR.GetString("Max_request_length_exceeded"), null, 0xbbc); } } The ctor sets those options: internal HttpBufferlessInputStream(HttpContext context, bool persistEntityBody, bool disableMaxRequestLength) { this._context = context;...
.net,tfs,tfsbuild,tfs2013
You can use the /ToolsVersion switch. You'll need to go into the Advanced settings of your build process and add the switch as an MSBuild argument: /ToolsVersion:2.0...
visual-studio,visual-studio-2013,tfs,tfs2013,tfvc
Use Compare... and select Latest Version. That's best executed from the command line or the Source Control Explorer. If you compare 'Latest Version' (remote) with 'Workspace Version' (local), it will tell you what has changed on the server since the last get-latest. If you compare 'Latest Version' (remote) with 'Latest Version'...
image,git,tfs,git-push
The problem occurs with msysgit and curl in the current version. There's a problem with handling authentication over HTTPS: Documented here: https://github.com/msysgit/git/issues/349 Solution: Install the pre-release of Git for Windows 2.x...
visual-studio-2012,tfs,tfs-sdk
I guess you are using the wrong link type. You should not use the WorkItemLinks property nor the WorkItemLink class. Instantiate an ExternalLink object and add it to the WorkItem.Links collection instead. You can find sample code at TFS2010: How to link a WorkItem to a ChangeSet....
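A minimal sketch of that approach, following the pattern from the linked post. It assumes an already-loaded WorkItem instance named workItem and a placeholder changeset URI; both are assumptions, not from the original answer.

```csharp
using Microsoft.TeamFoundation.WorkItemTracking.Client;

// 'workItem' is assumed to be a WorkItem fetched from a WorkItemStore.
// Look up the registered "Changeset" external link type on the store.
RegisteredLinkType linkType =
    workItem.Store.RegisteredLinkTypes[ArtifactLinkTypeNames.Changeset];

// Placeholder artifact URI; a real one comes from Changeset.ArtifactUri.
string changesetUri = "vstfs:///VersionControl/Changeset/12345";

// Add to the Links collection (not WorkItemLinks) and save.
workItem.Links.Add(new ExternalLink(linkType, changesetUri));
workItem.Save();
```

Using Links with an ExternalLink is what distinguishes artifact links (changesets, versioned items) from work-item-to-work-item links, which is why WorkItemLinks is the wrong collection here.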
asp.net,iis,memory,console-application
The application pool (and yes IIS Express even has these) for the site that your .aspx page is running in is probably configured for 32 bit mode which is why it's returning 4GB and 3.3GB respectively. Being a 32 bit process that's all it can see. If you're running this...
asp.net,iis,web-config,windows-authentication,global-asax
If you are using Windows authentication, then you should keep this in mind: if you are manually enabling Windows authentication in IIS, please do not include the code below in your web.config: <authentication mode="Windows" /> If you use this, it will cause the same problem as I stated above...
c#,tfs
Yes, both TFS and VSO provide the same capabilities for developers working together to build a single product. https://www.visualstudio.com/features/version-control-vs You would be best using Git if you are just starting out, or TFVC if you already have a large legacy codebase. You would also be best using VSO (TFS Online)...
tfs,tfs2013,tfvc
Visual Studio 2015 will offer something like that in the IDE as part of Code Lens when you're using a Git repository. For TFVC it's possible to construct a report like this based on the Code Churn dimension in the data warehouse, but there is no out-of-the-box report that visualizes...
asp.net,asp.net-mvc,iis
Visual Studio itself provides an option to create the virtual directories and mappings in IIS. Please follow step 1 and step 2; it may help, if I have understood the question correctly. Step 1: right-click the project in Visual Studio and select Properties. Step 2: select the 'Web' option in the left...
visual-studio-2013,tfs,tfs2013
This is not possible by default using Visual Studio, but when you shelve your changes into a shelveset, you can move it to the other branch by using the TFS Power Tools. The command you need is: tfpt unshelve shelvesetName /migrate /source:$/SourceBranch /target:$/TargetBranch You can find the TFS Power...
c#,.net,iis,file-permissions
You need to enable impersonation. See this link https://technet.microsoft.com/en-au/library/cc730708(v=ws.10).aspx
visual-studio-2012,tfs,nuget
As of NuGet v2.7, MSBuild-Integrated Package Restore has been deprecated and replaced with Automatic Package Restore. See documentation on how to migrate your solution to the new feature: Migrating MSBuild-Integrated solutions to use Automatic Package Restore To learn more about Automatic Package Restore see NuGet Package Restore Both articles discuss...
php,apache,symfony2,session,iis
Apparently, setting session cookie domains does not work for top-level domains (TLDs), like .dev. Changed my code to: ini_set('session.cookie_domain', '.local.dev'); and now I am able to set session variables on the .local.dev website and read them in new.local.dev. Both apps are physically in separate folders, operate from two IIS entries...
asp.net,iis
If you set 'Enable Integrated Windows Authentication' (which is the default), and the server requires integrated Windows authentication, then the user will be authenticated silently using current default credentials, if possible. If you disable Integrated Windows Authentication, the user will be prompted to supply credentials. See this KB article for...
php,wordpress,iis,permissions
Assuming:
+----------+     +----------+
| Server A | <-- | Server B |
+----------+     +----------+
First, let's look at the App Pool for Server A -> Site A and Server B -> Site B. I would advise using Impersonation versus a service account. This will allow you to leverage AD or...
powershell,iis,octopus-deploy
You can use the New-WebBinding cmdlet: New-WebBinding ` -Name $webSiteName ` -Protocol 'http' ` -Port $bindingPort ` -IPAddress $bindingIpAddress ` -HostHeader $bindingHost And use Get-WebBinding cmdlet to check whether the binding already exists....
c#,.net,powershell,tfs
Shai Raiten's Blog is great for learning the TFS API. For getting file history - read this post: http://blogs.microsoft.co.il/shair/2014/09/10/tfs-api-part-55-source-control-get-history/...
asp.net,iis,website,iis-6
The solution is to force IIS to write the changes from the cache into the Metabase.xml; this way the new configuration will be available for editing. This is done using the command: %systemroot%\system32\IIsCnfg.vbs /save
windows,tfs,kerberos
Kerberos is not a TFS capability but one of Active Directory. If you are able to get a Kerberos token for the TFS accounts with the delegated URL set in your SPN, then you only need to switch TFS over. You might find the option in the console, but I...
visual-studio-2013,tfs
You have to connect to the new TFS from the Team > Connect to Team Foundation Server menu.
iis,salt-stack,salt-contrib
I haven't had time to set up an environment to test, but I'm guessing it's a bug in the code.
c#,tfs,parallel-processing,invalidoperationexception,parallel.foreach
Please help me figure out what I'm doing wrong regarding the dictionaries. The exception is thrown because List<T> is not thread-safe. You have a shared resource which needs to be modified, using Parallel.ForEach won't really help, as you're moving the bottleneck to the lock, causing the contention there, which...
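One way to sketch the fix: replace the shared collection with a concurrent one so the Parallel.ForEach body needs no lock at all. The numbers and class name here are purely illustrative.

```csharp
using System;
using System.Collections.Concurrent;
using System.Linq;
using System.Threading.Tasks;

class ParallelExample
{
    static void Main()
    {
        // ConcurrentDictionary is safe to modify from multiple threads,
        // unlike List<T> or Dictionary<TKey, TValue>.
        var results = new ConcurrentDictionary<int, int>();

        Parallel.ForEach(Enumerable.Range(0, 1000), i =>
        {
            results.TryAdd(i, i * i); // no explicit lock needed
        });

        Console.WriteLine(results.Count); // all 1000 distinct keys were added
    }
}
```

Note the caveat from the answer still applies: if the work per item is dominated by touching the shared collection, parallelizing may not help even with a lock-free structure.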
.net,iis,lucene,umbraco,application-pool
There's an issue with frequent app pool recycles when you update files in App_Data frequently (which Umbraco does). A MS HotFix was posted for it this morning: see MS download here. It sounds like this might be the issue that you've been having.
asp.net-mvc,iis
Neither MVC nor IIS does any port listening or HTTP parsing. That's http.sys's job; it is the HTTP Server API. See MSDN: HTTP Server API, how exactly does http.sys work, Introduction to IIS Architectures, and especially HTTP Request Processing in IIS. IIS adds a lot of functionality on top of http.sys,...
tfs,tfsbuild
You can use the Copy activity, which is available out of the box; also, if you use the TFS 2013 default template, it has a post-build activity that can run PowerShell.
powershell,iis
You can enumerate all websites using the IIS PSDrive: Import-Module WebAdministration; Get-ChildItem IIS:\Sites | select -expand Name | % { Set-WebConfigurationProperty -PSPath MACHINE/WEBROOT/APPHOST -Location $_ -Filter system.webServer/asp -Name enableParentPaths -Value true } ...
tfs,msbuild,sonarqube,sonar-runner
@Techtwaddle is correct: the MSBuild.Runner invokes the sonar-runner. The MSBuild.Runner v0.9 does the following: fetches configuration settings from the SonarQube server; gathers information during the MSBuild phase; generates a sonar-project.properties file; invokes the sonar-runner to carry out further analysis. Some of the analysis is now performed before calling the sonar-runner....
tfs,tfs2013,tfs-workitem,tfs-process-template
This is not possible with standard process template customizations in a way that stores the concatenated values in a different field. There is a workaround available, but it requires a server-side plugin that triggers after a work item is changed. An example implementation of such a plugin would be the...
visual-studio,tfs,visual-studio-online
You're right, you need to enable alternate credentials. You were looking in the wrong place to set it up, though. It looks like you were trying to use a service hook. Just follow steps 1 and 2 in the VSO OAuth documentation: Click on your user name, go to your...
c#,c++,winforms,visual-studio-2012,tfs
You'll have to use the VersionControlServer class from TFS Client Object Model. You can find an example here....
visual-studio-2013,tfs
I recommend installing 'TFS Source Control Explorer Extension' - From below link: https://visualstudiogallery.msdn.microsoft.com/af70cbb7-1e0d-4d16-bc57-cccc15370c51...
asp.net,asp.net-mvc,iis
You should be able to obtain the identity associated with the application's current user off the Request object: public partial class _Default : System.Web.UI.Page { protected void Page_Load(object sender, EventArgs e) { // foo is just a <DIV runat="server"/> foo.InnerHtml += Request.LogonUserIdentity.Name + "<br/>"; foo.InnerHtml += Request.LogonUserIdentity.User.Value + "<br/>"; } }...
iis,windows-server,windows-server-2012-r2,iis-8.5,bonobo
The error message describes what the problem is: the handlers configuration section is locked at the server level. Select the server node and open the Feature Delegation section, then set Handler Mappings to Read/Write. You can also run the following as an elevated administrator: %windir%\system32\inetsrv\appcmd unlock config -section:system.webServer/handlers
In this article, you will find information regarding the newest release for Team Foundation Server 2018.
To learn more about Team Foundation Server 2018, see the Team Foundation Server Requirements and Compatibility page. Visit the visualstudio.com/downloads page to download other TFS 2018 products.
Direct upgrade to Team Foundation Server 2018 Update 2 is supported from TFS 2012 and newer. If your TFS deployment is on TFS 2010 or earlier, you need to perform some interim steps before upgrading to TFS 2018 Update 2. Please see the chart below and the TFS Install page for more information.
Important
You do not need to upgrade to TFS 2018 RTM before upgrading to TFS 2018 Update 2.
You can now upgrade to TFS 2018 Update 2 and continue to connect your XAML controllers and run XAML builds. When we removed support for XAML build in TFS 2018 RTW and Update 1, some of you could not upgrade due to having legacy XAML builds, and we want to unblock you. Although TFS 2018 Update 2 supports XAML builds for your legacy builds, XAML build is deprecated and there will be no further investment, so we highly recommend converting to a newer build definition format.
We have added a lot of new value to Team Foundation Server 2018 Update 2. Some of the highlights include:
You can find details about features in each area:
When viewing a file, you usually see the version at the tip of the selected branch. The version of a file at the tip may change with new commits. If you copy a link from this view, your links can become stale because the URL only includes the branch name, not the commit SHA. You can now easily switch the Files view to update the URL to refer to the commit rather than the branch. If you press the 'y' key, your view switches to the tip commit of the current branch. You can then copy permanent links.
Sometimes mistakes can be made when cleaning up old repositories in source control. If a Git repository is deleted within the last 30 days, it can be recovered through the REST API. See the documentation for the list and recover operations for more information.
To improve security and compatibility, we have updated the list of ciphers supported for SSH. We have added two new ciphers and deprecated three, matching OpenSSH's direction. The deprecated ciphers continue to work in this release. They will be removed in the future as usage falls off.
Added:
Deprecated:
In this Update, you will find two new repository settings to help keep Git running smoothly.
Case enforcement switches the server from its default case-sensitive mode, where 'File.txt' and 'file.txt' are two different files, to a Windows- and macOS-friendly mode where 'File.txt' and 'file.txt' refer to the same file. This setting affects files, folders, branches, and tags. It also prevents contributors from accidentally introducing case-only differences. Enabling case enforcement is recommended when most of your contributors are running Windows or macOS.
Limit file sizes allows you to prevent new or updated files from exceeding a size limit you set. The more large files a Git repository has in its history, the worse clone and fetch performance becomes. This setting prevents the accidental introduction of such files.
Searching for a file in commits or pull requests that modified more than 1,000 files was inefficient; you would need to click the Load more link several times to find the file you were interested in. Now, when you filter content in the tree view, the search is done across all files in the commit instead of just the first 1,000 files loaded. The performance of the commit details page is also improved when more than 1,000 files are modified.
You can perform a Git force push and update a remote ref even if it is not an ancestor of the local ref. This may cause others to lose commits and it can be very hard to identify the root cause. In the new pushes view, we have made force pushes noticeable in order to help troubleshoot issues related to missing commits.
Clicking on the force push tag takes you to the removed commit.
The Blame view is great for identifying the last person to change a line of code. However, sometimes you need to know who made the previous change to a line of code. The newest improvement in blame can help - View blame prior to this commit. As the name suggests, this feature allows you to jump back in time to the version of the file prior to the version that changed a particular line, and view the blame info for that version. You can continue to drill back in time looking at each version of the file that changed the selected line of code.
Two new features are available in the file diff viewer: Toggle Word Wrap and Toggle White Space. The first allows the word wrap setting to be applied while in a diff view. This is particularly useful for reviewing PRs that contain files without frequent line breaks - markdown files are a good example. The option to toggle white space is helpful when only whitespace has changed in a line or file. Toggling this setting displays and highlights the whitespace characters (dots for spaces, arrows for tabs, etc.) in the diff.
To manage these settings, click on the editor preferences gear in the pull request editor or diff view. In the Files view, select the User Preferences option on the right-click menu.
Select the various editor features including Show and diff white space, Enable word wrap, Enable code folding, and Show minimap.
Code folding (called 'outlining' in some editors) is also being enabled for the web view. When code folding is enabled, click on the minus signs to collapse sections of code -- click on plus signs to expand collapsed sections. The F1 command palette also exposes options for folding various indentation levels across an entire file, making it easier to read and review large files.
Now you can view the build and release status of merge commits in the Pushes page. By clicking the status next to the push, you will find the specific build or release that the push is included in so that you can verify success or investigate failure.
Markdown is great for adding rich formatting, links, and images in pull request (PR) descriptions and comments. Email notifications for PRs now display the rendered markdown instead of the raw contents, which improves readability.
Inline images are not yet rendered inline (they are just shown as links), but we have that on our backlog to add in the future.
The TFVC Windows Shell Extension, that gives a lightweight version control experience integrated into Windows File Explorer, now supports TFS 2018. This tool gives convenient access to many TFVC commands right in the Windows Explorer context menu.
Formerly part of the TFS Power tools, the tool has been released as a standalone tool on the Visual Studio Marketplace.
Previously, anyone who could view a Git repository could work with its pull requests. We have added a new permission called Contribute to pull requests that controls access to creating and commenting on pull requests. All users and groups that previously held the Read permission are also granted this new permission by default. The introduction of this new permission gives administrators additional flexibility and control. If you require your Readers group to be truly read-only, you can deny the Contribute to pull requests permission.
See the quickstart documentation for setting repository permissions for more information.
Many times, replies to pull request (PR) comments are pretty brief, acknowledging that a change will be or has been made. This is not a problem when viewing these comments in the web view, but if you are reading a comment in an email notification, the context of the original comment is lost. A simple 'I'll fix it' has no meaning.
Now, whenever a reply is made to a PR comment, the comment emails include the prior replies in the body of the email message. This allows the thread participants to see the full context of the comment right from their inbox - no need to open the web view.
The feature to complete work items when completing pull requests now has a new repository setting to control the default behavior. The new setting to Remember user preferences for completing work items with pull requests is enabled by default, and honors the user's last state when completing future pull requests in the repo. If the new setting is disabled, then the Complete linked work items after merging option defaults to disabled for all pull requests in the repository. Users can still choose to transition linked work items when completing PRs, but they will need to opt-in each time.
Using branch policies can be a great way to increase the quality of your code. However, those policies have been limited to only the integrations provided natively by TFS. Using the new pull request Status API and the corresponding branch policy, third party services can participate in the pull request workflow just like native TFS features.
When a service posts to the Status API for a pull request, it immediately appears in the PR details view in a new Status section. The status section shows the description and creates a link to the URL provided by the service. Status entries also support an action menu (...) that is extensible for new actions added by web extensions.
Status alone does not block completion of a PR - that is where the policy comes in. Once PR status has been posted, a policy can then be configured. From the branch policies experience, a new policy is available to Require approval from external services. Select + Add service to begin the process.
In the dialog, select the service that is posting the status from the list and select the desired policy options.
Once the policy is active, the status is shown in the Policies section, under Required or Optional as appropriate, and the PR completion is enforced as appropriate.
To learn more about the status API, and to try it out for yourself, check out the documentation and samples.
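As a sketch of what a service posting status might look like, the following assumes an on-premises server at http://tfs:8080, a personal access token, and placeholder repository/PR identifiers; the URL segments, credentials, and api-version are assumptions to adapt to your deployment, not verified values.

```csharp
using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Threading.Tasks;

class PostPrStatus
{
    static async Task Main()
    {
        var client = new HttpClient();
        // Placeholder credentials: basic auth with "user:PAT".
        client.DefaultRequestHeaders.Authorization =
            new AuthenticationHeaderValue("Basic",
                Convert.ToBase64String(Encoding.ASCII.GetBytes("user:PAT")));

        // state, description, context, and targetUrl mirror the documented
        // status payload; values here are illustrative.
        var body = "{ \"state\": \"succeeded\", " +
                   "\"description\": \"External check passed\", " +
                   "\"context\": { \"name\": \"my-check\", \"genre\": \"external-service\" }, " +
                   "\"targetUrl\": \"https://example.test/results/1\" }";

        var url = "http://tfs:8080/tfs/DefaultCollection/_apis/git/repositories/" +
                  "{repositoryId}/pullRequests/{pullRequestId}/statuses" +
                  "?api-version=4.0-preview";
        var response = await client.PostAsync(url,
            new StringContent(body, Encoding.UTF8, "application/json"));
        Console.WriteLine(response.StatusCode);
    }
}
```

Once a status like this is posted, it appears in the PR's Status section and can be made blocking via the Require approval from external services policy described above.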
Extensions using pull request service hooks now have more details and filtering options for merge events. Any time a merge is attempted, the event is fired regardless of the success or failure of the merge. When a merge attempt fails, details about the reason for the failure are included.
When attempting to complete work items with a pull request, it is possible that the associated work item cannot be transitioned to the completed state. For example, a specific field might be required and needs user input before the state can transition. We have improved the experience to inform you when something is blocking the work item transition, enabling you to take action to make the necessary changes.
You can now mention pull requests in PR comments and work item discussions. The experience for mentioning a PR is similar to that of a work item, but uses an exclamation point (!) instead of a hash mark (#). Whenever you want to mention a PR, enter !, and you will see an interactive experience for picking a PR from your list of recent PRs. Enter keywords to filter the list of suggestions, or enter the ID of the PR you want to mention. Once a PR is mentioned, it is rendered inline with the ID and the full title, and it links to the PR details page.
Sometimes it is important to communicate extra information about a pull request to the reviewers. Maybe the pull request is still a work in progress, or it is a hotfix for an upcoming release - so you append some extra text in the title, perhaps a '[WIP]' prefix or 'DO NOT MERGE'. Labels now provide a way to tag pull requests with extra information that can be used to communicate important details and help organize pull requests.
Sometimes files are renamed or moved while a pull request is active. Previously, if there were comments on those renamed files, the latest view of the code would not display the comments. We have now improved comment tracking to follow the renames, displaying comments on the latest version of renamed or moved files.
Pull request diff views are great at highlighting the changes introduced in the source branch. However, changes to the target branch may cause the diff view to look different than expected. A new command is now available to view the diff of the 'preview' merge commit for the pull request - View merge commit. This merge commit is created to check for merge conflicts and to use with a pull request build, and it reflects what the merge commit will look like when the pull request is eventually completed. When the target branch has changes not reflected in the diff, the merge commit diff can be useful for seeing the latest changes in both the source and target branches.
Another command that is useful in conjunction with the View merge commit command is Restart merge (available on the same command menu). If the target branch has changed since the pull request was initially created, running this command creates a new preview merge commit, updating the merge commit diff view.
If you frequently have your code reviewed by the same individuals, you will find it easier than ever to add reviewers. When adding reviewers to your pull requests, a list of your recently added reviewers is automatically displayed when you put focus into the reviewers input box -- no need to search by name. Select them as you would any reviewer.
Auto-complete is a useful feature for teams using branch policies, but when using optional policies, it can be unclear exactly what is blocking a pull request from being completed. Now, when setting auto-complete for a pull request, the exact list of policy criteria holding up completion is clearly listed in the callout box. As each requirement is met, items are removed from the list until there are no remaining requirements and the pull request is merged.
Need to include an equation or mathematical expression in your pull request comments? You can now include KaTeX functions in your comments, using both inline and block commenting. See the list of supported functions for more information.
Whenever a topic branch is updated in a repository, a 'suggestion' to create a new pull request (PR) for the topic branch is shown. This is very useful for creating new PRs, and we have enabled it for those working in a forked repo, too. If you update a branch in a fork, the next time you visit the Code hub for either the fork or the upstream repo, you will see the suggestion to create a pull request. If you select the 'Create a pull request' link, you will be directed to the create PR experience, with the source and target branches and repos pre-selected.
Many times, a single repository contains code that is built by multiple continuous integration (CI) pipelines to validate the build and run tests. The integrated build policy now supports a path filtering option that makes it easy to configure multiple PR builds that can be required and automatically triggered for each PR. Just specify a path for each build to require, and set the trigger and requirement options as desired.
In addition to build, status policies also have the path filtering option available. This allows any custom or third party policies to configure policy enforcement for specific paths.
This feature was prioritized based on a suggestion.
Assign a work item to yourself (Alt + i), jump to discussion (Ctrl + Alt + d), and copy a quick link to the work item (Shift + Alt + c) using keyboard shortcuts. For the full list of new shortcuts, type '?' with a work item form open or see the table below.
The Column options dialog used to configure the columns of the work item grid in the Backlog, Queries, and Test hubs has been updated to use a new panel design. Search to find a field, drag and drop to reorder columns, or remove existing columns you no longer want.
As your project's Shared Queries tree grows, it can be difficult to determine if a query is no longer used and can be deleted. To help you manage your Shared Queries, we have added two new pieces of metadata to our query REST APIs, last executed by and last executed date, so that you can write clean-up scripts to delete stale queries.
Based on customer feedback, we have updated the behavior of multi-line text fields in work item query results views in the web, Excel, and Visual Studio IDE to remove HTML formatting. When added as a column to the query, multi-line text fields now display as plain text. Here is an example of a feature with HTML in the description.
In the past, the query results would have rendered something like <div><b><u>Customer Value</u>...
Fields that support the 'In' query operator now support 'Not In'. Write queries for work items 'Not In' a list of IDs, 'Not In' a list of states, and much more, all without having to create many nested 'Or' clauses. This feature was prioritized based on a customer suggestion. Keep submitting those ideas and voting up those most important to you.
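In WIQL, such a query might look like the following sketch (the state names and IDs are hypothetical):

```
SELECT [System.Id], [System.Title], [System.State]
FROM WorkItems
WHERE [System.State] NOT IN ('Closed', 'Removed')
  AND [System.Id] NOT IN (101, 102, 103)
```

A single NOT IN clause here replaces what previously required several nested 'Or' (or negated 'And') clauses.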
We have introduced two new query macros for the ID field to help you find work items that may be important to you. See what items you were mentioned in over the last 30 days using @RecentMentions or take a look at work items you have recently viewed or edited using @MyRecentActivity.
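For example, a WIQL sketch using one of the new macros (the exact WIQL spelling of the macro with the 'In' operator is an assumption here):

```
SELECT [System.Id], [System.Title], [System.ChangedDate]
FROM WorkItems
WHERE [System.Id] IN (@MyRecentActivity)
ORDER BY [System.ChangedDate] DESC
```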
Notifications can now be defined using conditions on custom fields and tags; not only when they change but when certain values are met. This has been a top customer suggestion in UserVoice (see 6059328 and 2436843), and allows for a more robust set of notifications that can be set for work items.
We have added a new Mentioned pivot under the My work items page. Inside this pivot, you can review the work items where you have been mentioned in the last 30 days. With this new view, you can quickly take action on items that require your input and stay up to date on conversations that are relevant to you.
This same pivot is also available through our mobile experience, bringing consistency between both mobile and desktop.
The Delivery Plans extension now makes use of our common filtering component, and is consistent with our grid filtering experiences for work items and Boards. This filtering control brings improved usability and a consistent interface to all members of your team.
Many of you care about a specific plan or set of plans and use favorites for quick access to the content. First, we have updated the Plans hub to navigate to your most recently visited plan instead of the directory page. Second, once there, you can use the favorites picker to quickly switch to another plan or use the breadcrumb to navigate back to the directory page.
You can now expand or collapse all the items on the sprint Task board with just a single click.
Often, when migrating work items from another source, organizations want to retain all the original properties of the work item. For example, you may want to create a bug that retains the original created date and created by values from the system where it originated.
The API to update a work item has a bypassrule flag to enable that scenario. Previously, the identity who made that API request had to be a member of the Project Collection Administrators group. We have added a permission at the project level to execute the API with the bypassrule flag.
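A migration call with the bypassrule flag is an ordinary JSON Patch request with `bypassRules=true` on the query string. The sketch below builds the patch document; the server URL and `api-version` in the comment are assumptions, and the field values are hypothetical.

```python
import json

def build_migration_patch(title, created_date, created_by):
    """Build a JSON Patch document that sets normally system-controlled fields.

    These fields are only accepted when the request is sent with
    bypassRules=true and the caller holds the required permission.
    """
    return [
        {"op": "add", "path": "/fields/System.Title", "value": title},
        {"op": "add", "path": "/fields/System.CreatedDate", "value": created_date},
        {"op": "add", "path": "/fields/System.CreatedBy", "value": created_by},
    ]

patch = build_migration_patch("Migrated bug", "2015-03-01T09:00:00Z", "Jane Doe <jane@example.com>")
body = json.dumps(patch)

# Hypothetical request (server name and api-version are assumptions):
# PATCH https://<server>/<collection>/<project>/_apis/wit/workitems/$Bug
#        ?bypassRules=true&api-version=4.1
# Content-Type: application/json-patch+json
```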
In TFS 2015, we introduced a web-based, cross-platform build system. XAML builds are not supported in TFS 2018 RTW or Update 1, but we have re-enabled XAML builds in TFS 2018 Update 2. We encourage you to migrate your XAML builds.
When you upgrade to TFS 2018 Update 2:
If you have any XAML build data in your team project collection, you will get a warning about the deprecation of XAML build features.
You will need to use VS or Team Explorer 2017 to edit XAML build definitions or to queue new XAML builds.
If you need to create new XAML build agents, you will need to install them using the TFS 2015 build agent installer.
For an explanation of our XAML build deprecation plan, see the Evolving TFS/Team Services build automation capabilities blog post.
You have been able to use phases to organize your build steps and to target different agents using different demands for each phase. We have added several capabilities to build phases so that you can now:
Specify a different agent queue for each phase. This means you can, for example:
Run tests faster by running them in parallel. Any phase that has parallelism configured as 'Multi-agent' and contains a 'VSTest' task now automatically parallelizes test execution across the configured agent count.
Permit or deny scripts to access the OAuth token in each phase. This means, for example, you can now allow scripts running in your build phase to communicate with VSTS over REST APIs and, in the same build definition, block the scripts running in your test phase.
Run a phase only under specific conditions. For example, you can configure a phase to run only when previous phases succeed, or only when you are building code in the master branch.
To learn more, see Phases in Build and Release Management.
By popular request on UserVoice, you can now specify that a scheduled build not run when nothing has changed in your code. You can control this behavior using an option on the schedule. By default, we will not schedule a new build if your last scheduled build (from the same schedule) has passed and no further changes have been checked in to your repo.
You now have better integration for performing continuous integration (CI) builds if you use GitHub Enterprise for version control. Previously, you were limited to polling for code changes using the External Git connector, which may have increased the load on your servers and caused delays before builds were triggered. Now, with official GitHub Enterprise support, team CI builds are immediately triggered. In addition, the connection can be configured using various authentication methods, such as LDAP or built-in accounts.
The new Download Secure File task supports downloading (to agent machines) encrypted files from the VSTS Secure Files library. As the file is downloaded, it is decrypted and stored on the agent's disk. When the build or release completes, the file is deleted from the agent. This allows your build or release to use sensitive files, such as certificates or private keys that are otherwise securely encrypted and stored in VSTS. For more information, see Secure files documentation.
The Install Apple Provisioning Profile task already supports installing (on agent machines) provisioning profiles that are stored in the VSTS Secure Files library. Provisioning profiles are used by Xcode to sign and package Apple apps, such as for iOS, macOS, tvOS, and watchOS. Now, provisioning profiles can be installed from source code repositories. Though use of the Secure Files library is recommended for greater security of these files, this improvement addresses provisioning profiles already stored in source control.
Builds from GitHub or GitHub Enterprise already link to the relevant commit. It is equally important to be able to trace a commit to the builds that built it. That is now possible by enabling source tagging in TFS. While choosing your GitHub repository in a build definition, select the types of builds you want to tag, along with the tag format.
Then watch build tags appear on your GitHub or GitHub Enterprise repository.
For building certain Java projects, specific JDKs may be required but unavailable on agent machines. For example, projects may require older or different versions of IBM, Oracle, or open-source JDKs. The Java Tool Installer task downloads and installs the JDK needed by your project during a build or release. The JAVA_HOME environment variable is set accordingly for the duration of the build or release. Specific JDKs are available to the Java Tool Installer using a file share, a source code repository, or Azure Blob Storage.
The Xcode task has been updated with a new major version (4.*) that improves configuration of Xcode building, testing, and packaging. If your Xcode project has a single, shared scheme, it is automatically used. Additional inline help was added. Deprecated features, such as xcrun packaging, were removed from the Xcode task's properties. Existing build and release definitions must be modified to use this latest 4.* version of the Xcode task. For new definitions, if you need a previous Xcode task version's deprecated capabilities, you can select that version in your definition.
Continuous monitoring is an integral part of DevOps pipelines. Ensuring that the app in a release is healthy after deployment is as critical as the success of the deployment process itself. Enterprises have adopted various tools for automatic detection of app health in production and for keeping track of customer-reported incidents. Until now, approvers had to manually monitor the health of the apps from all these systems before promoting the release. However, Release Management now supports integrating continuous monitoring into release pipelines. Use this to ensure the system repeatedly queries all the health signals for the app until all of them are successful at the same time, before continuing the release.
You start by defining pre-deployment or post-deployment gates in the release definition. Each gate can monitor one or more health signals corresponding to a monitoring system of the app. Built-in gates are available for 'Azure monitor (application insight) alerts' and 'Work items'. You can integrate with other systems using the flexibility offered through Azure functions.
At the time of execution, the Release starts to sample all the gates and collect health signals from each of them. It repeats the sampling at each interval until signals collected from all the gates in the same interval are successful.
Initial samples from the monitoring systems may not be accurate, as not enough information may be available for the new deployment. The 'Delay before evaluation' option ensures the Release does not progress during this period, even if all samples are successful.
No agents or pipelines are consumed during sampling of gates. See the documentation for release gates for more information.
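The sampling behavior described above can be sketched as a simple loop; this is an illustrative simulation of the evaluation logic, not the product's implementation. Each gate is modeled as a callable returning healthy/unhealthy, and results before the 'Delay before evaluation' window never promote the release.

```python
import time

def evaluate_gates(gates, interval_seconds, delay_seconds, timeout_seconds,
                   clock=time.monotonic, sleep=time.sleep):
    """Repeatedly sample every gate until all succeed in the same interval.

    `gates` is a list of zero-argument callables returning True (healthy)
    or False (unhealthy). Successful samples collected before
    `delay_seconds` has elapsed are ignored, mirroring the
    'Delay before evaluation' option. Returns True if the release may
    proceed, False if the evaluation times out.
    """
    start = clock()
    while clock() - start < timeout_seconds:
        results = [gate() for gate in gates]           # sample all gates together
        elapsed = clock() - start
        if elapsed >= delay_seconds and all(results):  # all healthy in same interval
            return True
        sleep(interval_seconds)
    return False                                       # timed out: do not promote
```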
Multiple artifact sources can be added to a release definition and configured to trigger a release. A new release is created when a new build is available for any of the sources. The same deployment process is executed regardless of which source triggered the release. You can now customize the deployment process based on the triggering source. For auto-triggered releases, the release variable Release.TriggeringArtifact.Alias is now populated to identify the artifact source that triggered the release. This can be used in task conditions, phase conditions, and task parameters to dynamically adjust the process; for example, you can deploy only the artifacts that changed through the environments.
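For example, a custom task condition that runs a deployment task only when a particular artifact triggered the release might look like the following (the alias '_webApp' is hypothetical):

```
and(succeeded(), eq(variables['Release.TriggeringArtifact.Alias'], '_webApp'))
```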
Previously in role based security, when the security access roles were set, they were set for a user or group at hub level for Deployment groups, Variable groups, Agent queues, and Service endpoints. Now you can turn on and off inheritance for a particular entity so you can configure security just the way you want to.
Managing approvals with releases is now simpler. For pipelines having the same approver for multiple environments that deploy in parallel, the approver currently needs to act on each of the approvals separately. With this feature, you can now complete multiple pending approvals at the same time.
Release templates let you create a baseline for you to get started when defining a release process. Previously, you could upload new ones to your account, but now authors can include release templates in their extensions. You can find an example on the GitHub repo.
Similar to conditional build tasks, you can now run a task or phase only if specific conditions are met. This will help you model rollback scenarios.
If the built-in conditions do not meet your needs, or if you need more fine-grained control over when the task or phase runs, you can specify custom conditions. Express the condition as a nested set of functions. The agent evaluates the innermost function and works its way outward. The final result is a Boolean value that determines if the task is to be run.
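As a sketch, a rollback step that should run only when a previous step failed, and only for the master branch, could use a custom condition like this (the variable name is an assumption):

```
and(failed(), eq(variables['Build.SourceBranch'], 'refs/heads/master'))
```

The agent evaluates the innermost failed() and eq() calls first, then the outer and(), yielding the Boolean that decides whether the task or phase runs.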
Service endpoints enable connection to external and remote services to execute tasks for a build or deployment. Endpoints are configured at project scope and shared between multiple build and release definitions. Service endpoint owners can now get a consolidated view of the builds and deployments using an endpoint, which can help improve auditing and governance.
You can now edit the default properties of Git and GitHub artifact types even after the artifact has been linked. This is particularly useful in scenarios where the branch for the stable version of the artifact has changed, and future continuous delivery releases should use this branch to obtain newer versions of the artifact.
You can now manually trigger a Deploy action to multiple environments of a release at the same time. This allows you to select multiple environments in a release with failed configurations or deployments, and re-deploy to all of the environments in one operation.
Consuming projects from Jenkins just got even better.
First, you can now consume Jenkins multi-branch pipeline projects as artifact sources in a release definition.
Second, while previously you could link Jenkins projects as artifacts only from the root folder of a Jenkins server, now Jenkins projects can be consumed when organized at folder level. You see the list of Jenkins projects, along with folder paths, in the list of sources from which you select the project to be consumed as artifact source.
This feature enables releases to use images stored in a Docker Hub registry or an Azure Container Registry (ACR). This is a first step towards supporting scenarios such as rolling out new changes region-by-region by using the geo-replication feature of ACR or deploying to an environment (such as production) from a container registry that has images for only the production environment.
You can now configure Docker Hub or ACR as a first-class artifact in the + Add artifact experience of a release definition. For now the release has to be triggered manually or by another artifact but we look forward to adding a trigger based on the push of a new image to the registry soon.
There are now several default version options when linking version control artifacts to a release definition. You can configure a specific commit/changeset or simply configure the latest version to be picked from the default branch. Normally you configure it to pick up the latest version, but this is especially useful in some environments where a golden artifact version needs to be specified for all future continuous deployments.
You can now configure a release trigger filter based on the default branch specified in the build definition. This is particularly helpful if your default build branch changes every sprint and the release trigger filters need to be updated across all the release definitions. Now you just need to change the default branch in the build definition, and all the release definitions automatically use it. For example, if your team creates release branches for each sprint release payload, you update the build definition to point to the new sprint release branch and the release picks this up automatically.
Now you can set a trigger on a Package Management artifact in a Release definition so that a new release is automatically created when a new version of the package has been published. See the documentation for triggers in Release Management for more information.
Previously, when a variable group was added to a release definition, the variables it contained were available to all the environments in the release. Now, you have the flexibility to scope the variable groups to specific environment(s) instead. This makes them available to one environment but not other environments of the same release. This is great when you have an external service, such as an SMTP email service, which is different between environments.
When deploying containerized apps, the container image is first pushed to a container registry. After the push is complete, the container image can be deployed to a Web App for Containers or a Kubernetes cluster. You can now enable automatic creation of releases on updates to the images stored in Docker Hub or Azure Container Registry by adding them as an artifact source.
When a release with multiple artifacts is auto-triggered, default versions saved in the release definition are picked up for all artifacts. Previously, Jenkins artifacts did not have a default version setting, and so you couldn't set a continuous deployment trigger on a release using Jenkins as the secondary artifact.
Now, you can specify a default version for Jenkins artifacts, with the options you are familiar with:
Release gates enable the addition of information-driven approvals to release pipelines. A set of health signals is collected repeatedly before or after deployment to determine whether the release should be promoted to the next stage. A set of built-in gates is provided, and 'Invoke Azure function' has so far been recommended as a means to integrate with other services. We have now simplified the route to integrate with other services and add gates through Marketplace extensions. You can now contribute custom gate tasks and provide release definition authors an enhanced experience for configuring the gate.
Learn more about authoring gate tasks.
Deployment Groups, which give you robust, out-of-the-box multi-machine deployment, are now generally available. With Deployment Groups, you can orchestrate deployments across multiple servers and perform rolling updates, while ensuring high availability of your application throughout. You can also deploy to servers on-premises or virtual machines on Azure or any cloud, and have end-to-end traceability of deployed artifact versions down to the server level.
The agent-based deployment capability relies on the same build and deployment agents that are already available. You can use the full task catalog on your target machines in the Deployment Group phase. From an extensibility perspective, you can also use the REST APIs for deployment groups and targets for programmatic access.
Upstream sources for nuget.org and npmjs.com are now available. Benefits include the ability to manage (unlist, deprecate, unpublish, delete, etc.) packages saved from upstream sources as well as guaranteed saving of every upstream package you use.
Until now, TFS package feeds have not provided any way to automatically clean up older, unused package versions. For frequent package publishers, this could result in slower feed queries in the NuGet Package Manager and other clients until some versions were manually deleted.
We have now enabled retention policies on TFS feeds. Retention policies automatically delete the oldest version of a package once the retention threshold is met. Packages promoted to views are retained indefinitely, giving you the ability to protect versions that are used in production or used widely across your organization.
To enable retention policies, edit your feed and enter a value in the Maximum number of versions per package in the Retention policies section.
The Packages page has been updated to use our standard page layout, command bar control, and the new standard filter bar.
In the open source community, it is common to use a badge that links to the latest version of your package in your repository's README. You can now create badges for packages in your feeds. Just check the Enable package badges option in feed settings, select a package and then click Create badge. You can copy the badge URL directly or copy pre-generated Markdown that links the badge back to your package's details page.
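The pre-generated Markdown follows the standard badge pattern: an image link wrapping the badge image. The placeholders below stand in for the URLs copied from the Create badge experience; use the generated values rather than constructing these URLs by hand:

```
[![MyPackage badge](<badge image URL from Create badge>)](<package details page URL>)
```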
We received a lot of feedback on the updated Package Management experience, where we moved the list of previous package versions into a breadcrumb picker on the package details page. We have added a new Versions pivot that brings more information about prior versions and makes it easier to copy the version number or get a link to an old version.
On the package list, you can now see the view(s) of each package version to quickly determine their quality. See the release views documentation for more information.
The npm task today works seamlessly with authenticated npm feeds (in Package Management or external registries like npm Enterprise and Artifactory), but until now it has been challenging to use a task runner like Gulp or an alternate npm client like Yarn unless that task also supported authenticated feeds. We have added a new npm Authenticate build task that adds credentials to your .npmrc so that subsequent tasks can use authenticated feeds successfully.
In the past, creating a feed set the creating user as the only feed owner, which could cause administration challenges in large organizations if that user switched teams or left the organization. To remove this single point of failure, creating a feed now uses the user's current project context to get the Project Administrators group and make it an owner of the feed as well. As with any permission, you can remove this group and further customize feed permissions using the feed settings dialog.
Deleting unused packages can help keep the package list clean but sometimes it can be done by mistake. Now you can restore deleted packages from the Recycle Bin. Deleted packages are retained in the Recycle Bin for 30 days, giving you ample time to restore if you need to.
Although you could share the URL to a package found in the Packages hub in the past, it was often difficult to use because you needed to include a project in the URL, which may or may not have applied to those using the link. With this Update, you can now share packages using a URL that automatically selects a project the recipient has access to.
The URL format is: `https://<TFSserverURL>/_packaging?feed=<feed>&package=<package>&version=<version>&protocolType=<NuGet|Npm|Maven>&_a=package`
All parameters except `<TFSserverURL>` are optional, but if you provide a package, you must also provide the protocol type.

The Visual Studio Test task in build/release requires Visual Studio on the agent to run tests. Rather than installing Visual Studio solely to run tests in production environments or to distribute tests over multiple agents, use the new Visual Studio Test Platform Installer task. This task acquires the test platform from nuget.org and adds it to the tools cache. The installer task satisfies the vstest demand, so a subsequent Visual Studio Test task in the definition can run without a full Visual Studio installation on the agent.
From the task catalog, add the installer task in your definition.
Configure the subsequent Visual Studio Test task to use the bits acquired through the installer.
Note
Limitations: The Test Platform package on NuGet currently does not support running Coded UI tests; enabling support for Coded UI tests is on the backlog. The Test Platform package on NuGet is cross-platform, but the VSTest task currently does not support running .NET Core tests. To run .NET Core tests, use the 'dot net' task.
Last year, we started on the journey to unify agents across build, release, and test. This was intended to address various pain points associated with using the WinRM-based Deploy Test Agent and Run Functional Tests tasks. It also enables you to use the Visual Studio Test (VSTest) task for all your testing needs, including:
The unified agents approach also allows administrators to manage all machines that are used for CI/CD in a uniform manner.
We have delivered several crucial pieces to enable this capability, including:
With all the above now in place, we are ready to deprecate these two tasks. While existing definitions that use the deprecated tasks will continue to work, we encourage you to move to using VSTest to take advantage of continued enhancement over time.
Over time, test assets accrue, and large applications can easily grow to thousands of tests. Teams are looking for better ways to navigate through large sets of test results so they can stay productive while identifying test failures, the associated root cause, or ownership of issues. To enable this, we have added three new filters under the Tests tab in Build and Release: Test Name, Container (DLL), and Owner (container owner).
Additionally, the existing Outcome filter now provides the ability to filter for multiple outcomes. The various filter criteria are cumulative in nature. For example, to see the outcome of your tests for a change you just committed, you can filter on the Container (DLL name), Owner (DLL owner), Test Name, or all of them to get to the results relevant to you.
Sometimes tests are flaky: they fail on one run and pass on another without any changes. Flaky tests can be frustrating and undermine confidence in test effectiveness, causing failures to be ignored and bugs to slip through. With this Update, we have deployed the first piece of a solution to help tackle the problem of flaky tests. You can now configure the Visual Studio Test task to re-run failed tests. The test results then indicate which tests initially failed and then passed on re-run. Support for re-running data-driven and ordered tests is coming later.
The Visual Studio Test task can be configured to control the maximum number of attempts to re-run failed tests and a threshold percentage for failures (e.g., only re-run tests if fewer than 20% of all tests failed) to avoid re-running tests in the event of widespread failures.
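The threshold behavior described here can be sketched as follows; this is an illustrative sketch of the decision logic, not the task's actual implementation.

```python
def should_rerun(total_tests, failed_tests, threshold_percent=20):
    """Decide whether failed tests should be re-run.

    Re-run only when the failure ratio is below the configured threshold,
    so that widespread failures (e.g. a broken environment) are not
    pointlessly retried.
    """
    if total_tests == 0 or failed_tests == 0:
        return False  # nothing ran, or nothing failed
    return (failed_tests / total_tests) * 100 < threshold_percent
```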
In the Tests tab under Build and Release, you can filter the test results with the Outcome 'Passed on rerun' to identify tests that exhibited unreliable behavior during the run. This currently shows the last attempt for each test that passed on re-run. The Summary view has also been modified to show 'Passed on rerun (n/m)' under Total tests, where n is the count of tests that passed on re-run and m is the total passed tests. A hierarchical view of all attempts is coming in the next few sprints.
We enhanced the VSTest task to publish logs generated by different kinds of logging statements corresponding to standard output and standard error for failed tests. We have also improved the preview experience to support viewing text and log file formats, with the capability to search in the log files.
You can search for your favorite Wiki pages by title or content right alongside code and work items. You can read more about Wiki search in the Microsoft DevOps Blog.
Wiki can be used for a variety of content. Sometimes it can be useful to print content from Wiki to read in your spare time, add comments using pen and paper, or even share an offline PDF copy with those outside of your VSTS project. Now, simply click on the context menu of a page and select Print page. This feature was prioritized based on a suggestion.
Note
Currently this feature is not supported on Firefox.
You can now use shortcuts to perform common edit and view actions in Wiki even faster using only your keyboard.
While viewing a page, you can add, edit, or create a subpage, for example:
While editing a page, you can quickly save, save and close, or just close.
These are in addition to standard editing shortcuts such as Ctrl+B for bold, Ctrl+I for italics, Ctrl+K for linking etc. See the full list of keyboard shortcuts for more information.
You can now create rich README.MD files in the code repositories. The markdown rendering of the MD files in code repositories now supports HTML tags, Block quotes, Emojis, image resizing, and mathematical formulas. There is parity in markdown rendering in Wiki and MD files in code.
If your application deals with mathematical formulas and equations, you can now put them in Wiki using the LaTeX format.
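For example, surrounding a formula with $$ delimiters renders it as a block equation on the page (here, the quadratic formula):

```
$$
x = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a}
$$
```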
Now you can reference work items in Wiki pages by pressing the '#' key to get a list of the most recently accessed work items and selecting the work item of interest. This is particularly useful while writing release notes, epics, specs, or other pages that require referring to a work item.
Now you can link a work item to a Wiki and vice versa. You can link work items to Wiki to create epic pages, release notes, and planning content that helps you track the work items associated with a Wiki page and validate what percentage of your epic page is complete.
Linked work items then show up on the Wiki page.
Add a link to a Wiki page from a work item through the new 'Wiki page' link type.
We heard you wanted a quicker and easier way to save a Wiki page. Now you can simply use Ctrl+S keyboard shortcut to save a page with a default revision message and continue editing. If you would like to add a custom revision message just click on the chevron next to the save button.
You can now paste rich text in the markdown editor of Wiki from any browser-based applications such as Confluence, OneNote, SharePoint, and MediaWiki. This is particularly useful for those who have created rich content such as complex tables and want to show it in Wiki. Simply copy content and paste it as HTML.
Earlier in Wiki, you could not reorder or re-parent pages using the keyboard, which impacted users who prefer keyboard operations. Now you can reorder pages using the Ctrl + Up or Ctrl + Down commands. You can also re-parent a page by clicking Move page in its context menu and selecting the new parent page.
Filtering the navigation pane in Wiki shows the entire page hierarchy. For example, if you filter for a page titled 'foobar', the filtered navigation pane shows all of its parent pages as well. This could cause confusion as to why pages not titled 'foobar' were showing up in the filtered results. Now, filtering content in Wiki highlights the text being searched, giving a clear picture of which titles are filtered and which are not.
You will observe similar behavior in all code navigation panes as well. For example, the file navigation pane in pull requests, commits, changesets, and shelvesets.
Data shows that users almost always Preview a Wiki page multiple times while editing content. For each page edit, users click on Preview 1-2 times on average. This results in a slow and sub-optimal edit experience and can be particularly time consuming for those new to markdown. Now you can see the preview of your page while editing.
There are multiple areas in TFS where information associated with a particular individual is shown, such as pull requests created by an individual and work items assigned to an individual. However, there is limited information about the individual themselves to give you complete context. The new profile card replaces the existing profile card in TFS and allows you to interact with and learn more about users within your TFS account. Through integrations with your default email and IM client, Active Directory (AD) users can send emails and start chats directly from the profile card. AD users can also see the organizational hierarchy within the profile card. Profile cards can be activated within the project home page (team members section), version control, work items, and Wiki sections by clicking on the contact card icon, profile picture, or user's name within comments.
Circle avatars are here! All profile pictures in the service now display in a circle shape, rather than a square. As an example, here is the actual pull request for this change (note the circular, non-square avatars).
You can now adorn projects with important keywords (tags). Tags are easily added and deleted directly from the project home page (by administrators) allowing users to quickly understand more about the purpose and scope of the project. We have more planned for how project tags can be leveraged, so stay tuned for more news here.
You can now re-order the groups on the account My favorites page using the up and down arrows in each group header.
We would love to hear from you! You can report a problem and track it through Developer Community and get advice on Stack Overflow. As always, if you have ideas on things you would like to see us prioritize, head over to UserVoice to add your idea or vote for an existing one.