.NET Blog – https://devblogs.microsoft.com/dotnet/

5 Copilot Chat Prompts .NET Devs Should Steal Today
Wed, 06 Aug 2025 17:05:00 +0000
https://devblogs.microsoft.com/dotnet/5-copilot-chat-prompts-dotnet-devs-should-steal-today/

Discover 5 practical GitHub Copilot Chat prompts to boost .NET development productivity, from code optimization to security reviews.

The post 5 Copilot Chat Prompts .NET Devs Should Steal Today appeared first on .NET Blog.

Artificial intelligence is quickly becoming a key part of the modern .NET developer’s toolkit. With GitHub Copilot Chat, you can save countless hours, eliminate friction, and unlock new levels of creativity by simply asking the right questions. But what exactly should you ask? Here are five GitHub Copilot Chat prompts every .NET dev should be using right now!

1. “Explain this code and suggest optimizations.”

When you inherit a legacy project or revisit old code, understanding what’s going on can be daunting. Add the files for your C# code into Copilot Chat and ask for not only an explanation but also recommendations for performance, readability, or maintainability improvements. You’ll save time and might learn a new trick or two!

2. “Write unit tests for this method/class.”

Testing is essential but often overlooked when deadlines loom. Put your cursor in the method or class and let Copilot Chat generate robust unit tests using xUnit, MSTest, or NUnit. It’s a great way to ensure coverage and catch edge cases you might have missed.
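For example, asking for xUnit tests on a small method typically produces something along these lines (a hedged sketch; the Calculator class and the generated tests are illustrative, not output from any specific Copilot session):

```csharp
using System;
using Xunit;

public class Calculator
{
    public int Divide(int dividend, int divisor) => dividend / divisor;
}

public class CalculatorTests
{
    [Fact]
    public void Divide_ReturnsQuotient_ForValidInput()
    {
        var calculator = new Calculator();
        Assert.Equal(2, calculator.Divide(10, 5));
    }

    [Fact]
    public void Divide_Throws_WhenDivisorIsZero()
    {
        // Edge case the prompt often surfaces: integer division by zero.
        var calculator = new Calculator();
        Assert.Throws<DivideByZeroException>(() => calculator.Divide(10, 0));
    }
}
```

Reviewing and tweaking the generated tests is still worthwhile; Copilot gives you a strong starting point, not a finished test suite.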

3. “Convert this code to use async/await.”

Modern .NET apps should leverage asynchronous programming for scalability and responsiveness. If you’ve got synchronous code, ask Copilot Chat to rewrite it with async/await patterns. This helps future-proof your codebase and enhances user experience.
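As an illustration, a conversion Copilot Chat might suggest looks like this (a simplified sketch; the class and method names are hypothetical):

```csharp
using System.Net.Http;
using System.Threading.Tasks;

public class PageFetcher
{
    // Before: blocking on the async call ties up the calling thread.
    public string DownloadPage(HttpClient client, string url)
        => client.GetStringAsync(url).GetAwaiter().GetResult();

    // After: async/await releases the thread while the request is in flight.
    public async Task<string> DownloadPageAsync(HttpClient client, string url)
        => await client.GetStringAsync(url);
}
```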

4. “Find and fix potential security issues in this snippet.”

Security is everyone’s responsibility, but it can be tough to spot every vulnerability. Ask Copilot Chat to review your code for common security pitfalls like SQL injection, XSS, or improper input validation. Let AI be your extra set of eyes before pushing to production.
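One of the most common fixes this prompt surfaces is replacing string-concatenated SQL with a parameterized query. A sketch of the before and after (the query, connection, and variable names are illustrative):

```csharp
using Microsoft.Data.SqlClient;

// Vulnerable: user input concatenated into the SQL text enables injection.
var unsafeCommand = new SqlCommand(
    $"SELECT * FROM Users WHERE Name = '{userName}'", connection);

// Fixed: a parameterized query keeps user input out of the SQL text.
var safeCommand = new SqlCommand(
    "SELECT * FROM Users WHERE Name = @name", connection);
safeCommand.Parameters.AddWithValue("@name", userName);
```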

5. “Generate sample data or mock objects for this model.”

Whether you’re prototyping an API or writing tests, realistic data is crucial. Copilot Chat can instantly generate mock data or objects for any model, helping you simulate real-world scenarios and get your app off the ground faster.
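For a simple model, the generated sample data typically looks something like this (a hypothetical example; the Customer record is illustrative):

```csharp
public record Customer(int Id, string Name, string Email);

// Sample data of the kind Copilot Chat can generate on request.
List<Customer> sampleCustomers =
[
    new Customer(1, "Ada Lovelace", "ada@example.com"),
    new Customer(2, "Grace Hopper", "grace@example.com"),
    new Customer(3, "Alan Turing", "alan@example.com")
];
```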

Conclusion

These prompts are just the beginning! Experiment with Copilot Chat, adapt these ideas, and create your own shortcuts. With the right questions, you can make AI your coding sidekick and take your .NET development to the next level. Check out even more great prompts at the Awesome GitHub Copilot Customizations repo.

What prompts are you using with Copilot Chat? Share your favorites in the comments below!

.NET and .NET Framework August 2025 servicing releases updates
Tue, 05 Aug 2025 22:54:56 +0000
https://devblogs.microsoft.com/dotnet/dotnet-and-dotnet-framework-august-2025-servicing-updates/

A recap of the latest servicing updates for .NET and .NET Framework for August 2025.

The post .NET and .NET Framework August 2025 servicing releases updates appeared first on .NET Blog.

Welcome to our combined .NET servicing updates for August 2025. Let’s get into the latest releases of .NET and .NET Framework. Here’s a quick overview of what’s new in this month’s servicing releases:

Security improvements

.NET and .NET Framework have been refreshed with the latest updates as of August 05, 2025. These updates contain non-security fixes.

This month, you will find the following non-security fixes:

                         .NET 8.0   .NET 9.0
Release Notes            8.0.19     9.0.8
Installers and binaries  8.0.19     9.0.8
Container Images         images     images
Linux packages           8.0.19     9.0.8
Known Issues             8.0        9.0

Release changelogs

.NET Framework August 2025 Updates

This month, there are no new security updates, but there are new non-security updates available. For recent .NET Framework servicing updates, be sure to browse our release notes for .NET Framework for more details.

See you next month

That’s it for this month. Make sure you update to the latest servicing release today.

Exploring new Agent Quality and NLP evaluators for .NET AI applications
Tue, 05 Aug 2025 17:05:00 +0000
https://devblogs.microsoft.com/dotnet/exploring-agent-quality-and-nlp-evaluators/

Introducing Agent Quality and NLP evaluators in the Microsoft.Extensions.AI.Evaluation libraries.

The post Exploring new Agent Quality and NLP evaluators for .NET AI applications appeared first on .NET Blog.

When building AI applications, comprehensive evaluation is crucial to ensure your systems deliver accurate, reliable, and contextually appropriate responses. We’re excited to announce key enhancements to the Microsoft.Extensions.AI.Evaluation libraries with new evaluators that expand evaluation capabilities in two key areas: agent quality assessment and natural language processing (NLP) metrics.

Agent Quality evaluators

The Microsoft.Extensions.AI.Evaluation.Quality package now includes three new evaluators specifically designed to assess how well AI agents perform in conversational scenarios involving tool use:

  • ToolCallAccuracyEvaluator: Assesses whether the agent invoked the right tools, with the right parameters, for the user’s request
  • TaskAdherenceEvaluator: Measures how well the agent’s response adheres to the task it was asked to perform
  • IntentResolutionEvaluator: Evaluates how well the agent understood and resolved the user’s intent

NLP (Natural Language Processing) evaluators

We’ve also introduced a new package, Microsoft.Extensions.AI.Evaluation.NLP, containing evaluators that implement common NLP algorithms for evaluating text similarity:

  • BLEUEvaluator: Implements the BLEU (Bilingual Evaluation Understudy) metric for measuring text similarity
  • GLEUEvaluator: Provides the GLEU (Google BLEU) metric, a variant optimized for sentence-level evaluation
  • F1Evaluator: Calculates F1 scores for text similarity and information retrieval tasks

Note

Unlike other evaluators in the Microsoft.Extensions.AI.Evaluation libraries, the NLP evaluators do not require an AI model to perform evaluations. Instead, they use traditional NLP techniques such as text tokenization and n-gram analysis to compute similarity scores.
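To illustrate the idea behind these metrics, here is a minimal sketch of a token-overlap F1 score (a simplified illustration of the concept, not the library’s actual implementation):

```csharp
using System;
using System.Linq;

static double TokenOverlapF1(string response, string reference)
{
    // Tokenize naively on whitespace; real tokenizers are more sophisticated.
    string[] responseTokens =
        response.ToLowerInvariant().Split(' ', StringSplitOptions.RemoveEmptyEntries);
    string[] referenceTokens =
        reference.ToLowerInvariant().Split(' ', StringSplitOptions.RemoveEmptyEntries);

    int overlap = responseTokens.Intersect(referenceTokens).Count();
    if (overlap == 0) return 0.0;

    double precision = (double)overlap / responseTokens.Length;
    double recall = (double)overlap / referenceTokens.Length;

    // F1 is the harmonic mean of precision and recall.
    return 2 * precision * recall / (precision + recall);
}
```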

These new evaluators complement the quality and safety-focused evaluators we covered in earlier posts. Together with custom, domain-specific evaluators that you can create using the Microsoft.Extensions.AI.Evaluation libraries, they provide a robust evaluation toolkit for your .NET AI applications.

Setting up your LLM connection

The agent quality evaluators require an LLM to perform evaluation. The code example that follows shows how to create an IChatClient that connects to a model deployed on Azure OpenAI. For instructions on how to deploy an OpenAI model in Azure, see Create and deploy an Azure OpenAI in Azure AI Foundry Models resource.

Note

We recommend using the GPT-4o or GPT-4.1 series of models when running the example below.

While the Microsoft.Extensions.AI.Evaluation libraries and the underlying core abstractions in Microsoft.Extensions.AI support a variety of models and LLM providers, the evaluation prompts used within the evaluators in the Microsoft.Extensions.AI.Evaluation.Quality package have been tuned and tested against OpenAI models such as GPT-4o and GPT-4.1. It is possible to use other models by supplying an IChatClient that can connect to your model of choice. However, the performance of those models against the evaluation prompts may vary, and may be especially poor for smaller or local models.

First, set the required environment variables. For this, you will need the endpoint for your Azure OpenAI resource, and the deployment name for your deployed model. You can copy these values from the Azure portal and paste them in the environment variables below.

SET EVAL_SAMPLE_AZURE_OPENAI_ENDPOINT=https://<your azure openai resource name>.openai.azure.com/
SET EVAL_SAMPLE_AZURE_OPENAI_MODEL=<your model deployment name (e.g., gpt-4o)>

The example uses DefaultAzureCredential for authentication. You can sign in to Azure using developer tooling such as Visual Studio or the Azure CLI.

Setting up a test project to run the example code

Next, let’s create a new test project to demonstrate the new evaluators. You can use any of the following approaches:

Using Visual Studio

  1. Open Visual Studio
  2. Select File > New > Project…
  3. Search for and select MSTest Test Project
  4. Choose a name and location, then click Create

Using Visual Studio Code with C# Dev Kit

  1. Open Visual Studio Code
  2. Open Command Palette and select .NET: New Project…
  3. Select MSTest Test Project
  4. Choose a name and location, then select Create Project

Using the .NET CLI

dotnet new mstest -n EvaluationTests
cd EvaluationTests

After creating the project, add the necessary NuGet packages:

dotnet add package Azure.AI.OpenAI
dotnet add package Azure.Identity
dotnet add package Microsoft.Extensions.AI.Evaluation
dotnet add package Microsoft.Extensions.AI.Evaluation.Quality
dotnet add package Microsoft.Extensions.AI.Evaluation.NLP --prerelease
dotnet add package Microsoft.Extensions.AI.Evaluation.Reporting
dotnet add package Microsoft.Extensions.AI.OpenAI --prerelease

Next, copy the following code into the project (inside Test1.cs). The example demonstrates how to run agent quality and NLP evaluators via two separate unit tests defined in the same test class.

using Azure.AI.OpenAI;
using Azure.Identity;
using Microsoft.Extensions.AI;
using Microsoft.Extensions.AI.Evaluation;
using Microsoft.Extensions.AI.Evaluation.NLP;
using Microsoft.Extensions.AI.Evaluation.Quality;
using Microsoft.Extensions.AI.Evaluation.Reporting;
using Microsoft.Extensions.AI.Evaluation.Reporting.Storage;
using DescriptionAttribute = System.ComponentModel.DescriptionAttribute;

namespace EvaluationTests;

#pragma warning disable AIEVAL001 // The agent quality evaluators used below are currently marked as [Experimental].

[TestClass]
public class Test1
{
    private static readonly ReportingConfiguration s_agentQualityConfig = CreateAgentQualityReportingConfiguration();
    private static readonly ReportingConfiguration s_nlpConfig = CreateNLPReportingConfiguration();

    [TestMethod]
    public async Task EvaluateAgentQuality()
    {
        // This example demonstrates how to run agent quality evaluators (ToolCallAccuracyEvaluator,
        // TaskAdherenceEvaluator, and IntentResolutionEvaluator) that assess how well an AI agent performs tasks
        // involving tool use and conversational interactions.

        await using ScenarioRun scenarioRun = await s_agentQualityConfig.CreateScenarioRunAsync("Agent Quality");

        // Get a conversation that simulates a customer service agent using tools to assist a customer.
        (List<ChatMessage> messages, ChatResponse response, List<AITool> toolDefinitions) =
            await GetCustomerServiceConversationAsync(chatClient: scenarioRun.ChatConfiguration!.ChatClient);

        // The agent quality evaluators require tool definitions to assess tool-related behaviors.
        List<EvaluationContext> additionalContext =
        [
            new ToolCallAccuracyEvaluatorContext(toolDefinitions),
            new TaskAdherenceEvaluatorContext(toolDefinitions),
            new IntentResolutionEvaluatorContext(toolDefinitions)
        ];

        // Run the agent quality evaluators against the response.
        EvaluationResult result = await scenarioRun.EvaluateAsync(messages, response, additionalContext);

        // Retrieve one of the metrics (example: Intent Resolution).
        NumericMetric intentResolution = result.Get<NumericMetric>(IntentResolutionEvaluator.IntentResolutionMetricName);

        // By default, a Value < 4 is interpreted as a failing score for the Intent Resolution metric.
        Assert.IsFalse(intentResolution.Interpretation!.Failed);

        // Results are also persisted to disk under the storageRootPath specified below. You can use the dotnet aieval
        // command line tool to generate an HTML report and view these results.
    }

    [TestMethod]
    public async Task EvaluateNLPMetrics()
    {
        // This example demonstrates how to run NLP (Natural Language Processing) evaluators (BLEUEvaluator,
        // GLEUEvaluator and F1Evaluator) that measure text similarity between a model's output and supplied reference
        // text.

        await using ScenarioRun scenarioRun = await s_nlpConfig.CreateScenarioRunAsync("NLP");

        // Set up the text similarity evaluation inputs. Response represents an example model output, and
        // referenceResponses represent a set of ideal responses that the model's output will be compared against.
        const string Response =
            "Paris is the capital of France. It's famous for the Eiffel Tower, Louvre Museum, and rich cultural heritage";

        List<string> referenceResponses =
        [
            "Paris is the capital of France. It is renowned for the Eiffel Tower, Louvre Museum, and cultural traditions.",
            "Paris, the capital of France, is famous for its landmarks like the Eiffel Tower and vibrant culture.",
            "The capital of France is Paris, known for its history, art, and iconic landmarks like the Eiffel Tower."
        ];

        // The NLP evaluators require one or more reference responses to compare against the model's output.
        List<EvaluationContext> additionalContext =
        [
            new BLEUEvaluatorContext(referenceResponses),
            new GLEUEvaluatorContext(referenceResponses),
            new F1EvaluatorContext(groundTruth: referenceResponses.First())
        ];

        // Run the NLP evaluators.
        EvaluationResult result = await scenarioRun.EvaluateAsync(Response, additionalContext);

        // Retrieve one of the metrics (example: F1).
        NumericMetric f1 = result.Get<NumericMetric>(F1Evaluator.F1MetricName);

        // By default, a Value < 0.5 is interpreted as a failing score for the F1 metric.
        Assert.IsFalse(f1.Interpretation!.Failed);

        // Results are also persisted to disk under the storageRootPath specified below. You can use the dotnet aieval
        // command line tool to generate an HTML report and view these results.
    }

    private static ReportingConfiguration CreateAgentQualityReportingConfiguration()
    {
        // Create an IChatClient to interact with a model deployed on Azure OpenAI.
        string endpoint = Environment.GetEnvironmentVariable("EVAL_SAMPLE_AZURE_OPENAI_ENDPOINT")!;
        string model = Environment.GetEnvironmentVariable("EVAL_SAMPLE_AZURE_OPENAI_MODEL")!;
        var client = new AzureOpenAIClient(new Uri(endpoint), new DefaultAzureCredential());
        IChatClient chatClient = client.GetChatClient(deploymentName: model).AsIChatClient();

        // Enable function invocation support on the chat client. This allows the chat client to invoke AIFunctions
        // (tools) defined in the conversation.
        chatClient = chatClient.AsBuilder().UseFunctionInvocation().Build();

        // Create a ReportingConfiguration for the agent quality evaluation scenario.
        return DiskBasedReportingConfiguration.Create(
            storageRootPath: "./eval-results", // The evaluation results will be persisted to disk under this folder.
            evaluators: [new ToolCallAccuracyEvaluator(), new TaskAdherenceEvaluator(), new IntentResolutionEvaluator()],
            chatConfiguration: new ChatConfiguration(chatClient),
            enableResponseCaching: true);

        // Since response caching is enabled above, all LLM responses produced via the chatClient above will also be
        // cached under the storageRootPath so long as the inputs being evaluated stay unchanged, and so long as the
        // cache entries do not expire (cache expiry is set at 14 days by default).
    }

    private static ReportingConfiguration CreateNLPReportingConfiguration()
    {
        // Create a ReportingConfiguration for the NLP evaluation scenario.
        // Note that the NLP evaluators do not require an LLM to perform the evaluation. Instead, they use traditional
        // NLP techniques (text tokenization, n-gram analysis, etc.) to compute text similarity scores.

        return DiskBasedReportingConfiguration.Create(
            storageRootPath: "./eval-results", // The evaluation results will be persisted to disk under this folder.
            evaluators: [new BLEUEvaluator(), new GLEUEvaluator(), new F1Evaluator()]);
    }

    private static async Task<(List<ChatMessage> messages, ChatResponse response, List<AITool> toolDefinitions)>
        GetCustomerServiceConversationAsync(IChatClient chatClient)
    {
        // Get a conversation that simulates a customer service agent using tools (such as GetOrders() and
        // GetOrderStatus() below) to assist a customer.

        List<ChatMessage> messages =
        [
            new ChatMessage(ChatRole.System, "You are a helpful customer service agent. Use tools to assist customers."),
            new ChatMessage(ChatRole.User, "Could you tell me the status of the last 2 orders on my account #888?")
        ];

        List<AITool> toolDefinitions = [AIFunctionFactory.Create(GetOrders), AIFunctionFactory.Create(GetOrderStatus)];
        var options = new ChatOptions() { Tools = toolDefinitions, Temperature = 0.0f };

        ChatResponse response = await chatClient.GetResponseAsync(messages, options);

        return (messages, response, toolDefinitions);
    }

    [Description("Gets the orders for a customer")]
    private static IReadOnlyList<CustomerOrder> GetOrders(
        [Description("The customer account number")] int accountNumber)
    {
        return accountNumber switch
        {
            888 => [new CustomerOrder(123), new CustomerOrder(124)],
            _ => throw new InvalidOperationException($"Account number {accountNumber} is not valid.")
        };
    }

    [Description("Gets the delivery status of an order")]
    private static CustomerOrderStatus GetOrderStatus(
        [Description("The order ID to check")] int orderId)
    {
        return orderId switch
        {
            123 => new CustomerOrderStatus(orderId, "shipped", DateTime.Now.AddDays(1)),
            124 => new CustomerOrderStatus(orderId, "delayed", DateTime.Now.AddDays(10)),
            _ => throw new InvalidOperationException($"Order with ID {orderId} not found.")
        };
    }

    private record CustomerOrder(int OrderId);
    private record CustomerOrderStatus(int OrderId, string Status, DateTime ExpectedDelivery);
}

Running the tests and generating the evaluation report

Next, let’s run the above unit tests. You can run them from the Test Explorer in Visual Studio or Visual Studio Code, or by running dotnet test from the command line.

After running the tests, you can generate an HTML report containing results for both the “Agent Quality” and “NLP” scenarios in the example above using the dotnet aieval tool.

First, install the tool locally in your project:

dotnet tool install Microsoft.Extensions.AI.Evaluation.Console --create-manifest-if-needed

Then generate and open the report:

dotnet aieval report -p <path to 'eval-results' folder under the build output directory for the above project> -o .\report.html --open

The --open flag will automatically open the generated report in your default browser, allowing you to explore the evaluation results interactively. Here’s a peek at the generated report – this screenshot shows the details revealed when you click on the “Intent Resolution” metric under the “Agent Quality” scenario.

A screenshot depicting the generated evaluation report

Learn more and provide feedback

For more comprehensive examples that demonstrate various API concepts, functionality, best practices, and common usage patterns for the Microsoft.Extensions.AI.Evaluation libraries, explore the API Usage Examples in the dotnet/ai-samples repository. Documentation and tutorials for the evaluation libraries are also available under The Microsoft.Extensions.AI.Evaluation libraries.

We encourage you to try out these evaluators in your AI applications and share your feedback. If you encounter any issues or have suggestions for improvements, please report them on GitHub. Your feedback helps us continue to enhance the evaluation libraries and build better tools for the .NET AI development community.

Happy evaluating!

.NET Conf 2025 – Announcing the Call for Content
Tue, 05 Aug 2025 16:00:00 +0000
https://devblogs.microsoft.com/dotnet/dotnet-conf-2025-announcing-the-call-for-content/

The .NET Conf 2025 Call for Content is now open! Join us November 11-13 for the premier .NET virtual event celebrating .NET 10. Submit your session proposal by August 31st and share your .NET expertise with developers worldwide.

The post .NET Conf 2025 – Announcing the Call for Content appeared first on .NET Blog.


Hey .NET developers! 🎉 The moment we’ve all been waiting for is here – the Call for Content for .NET Conf 2025 is officially open!

We want YOU to be part of this amazing celebration of everything .NET. Mark your calendars: November 11-13, 2025, will be three days of pure .NET goodness as we launch .NET 10 and dive into all the cool stuff happening in our ecosystem. Head over to dotnetconf.net to add it to your calendar so you don’t miss it!

Why .NET Conf Rocks

.NET Conf is our free, three-day virtual celebration that brings together .NET developers from around the world. It’s a community effort – we partner with Microsoft, the .NET Foundation, and awesome sponsors to make it happen. But honestly? The best part is YOU – the incredible .NET community.

This is where we get together to share what we’re building, learn from each other, and get genuinely excited about the future of .NET development.

This year we’re launching .NET 10, diving deep into .NET Aspire’s latest features, and exploring how AI is changing the game for .NET developers.

We Want to Hear from YOU

Here’s the thing – .NET Conf has always been about our amazing community. We’re looking for passionate developers who want to share their stories, show off their cool projects, and teach others what they’ve learned along the way.

Whether you’ve been speaking at conferences for years or you’ve never presented before but have something awesome to share – we want to hear from you! Seriously, some of our best sessions have come from first-time speakers who just had something cool they wanted to show the world.

What Kind of Sessions Are We Looking For?

We want 30-minute sessions (including Q&A time) that show off what’s possible with .NET. Here’s what gets us excited:

  • Web stuff: Cool ASP.NET Core projects, Blazor adventures, or that awesome web architecture you built
  • Mobile & Desktop: .NET MAUI apps, cross-platform tricks, or how you modernized that old desktop app
  • AI & Machine Learning: ML.NET projects, AI integrations, or how you’re using AI in your .NET apps
  • IoT & Edge: .NET running on tiny devices, IoT solutions, or embedded projects
  • Games: Building games with .NET, Unity integrations, or game development tips
  • Cloud & Containers: Your containerization journey, microservices patterns, or cloud-native adventures
  • DevOps: CI/CD pipelines that actually work, deployment strategies, or how you made your team more productive
  • Open Source: That library you built, your contribution story, or community projects you love

We’d love to see what you’re doing with .NET 9 or 10, but honestly, if you’ve got something cool and .NET-related that’ll make developers go “wow, I want to try that!” – we want to hear about it!

What Makes a Session Stand Out?

Insider Tip

The Microsoft .NET team will be showing off the shiny new features and big announcements. To give your session the best shot, focus on real-world content—your experiences, your projects, and your “aha!” moments.

Think about sessions like:

  • Your war stories: What you learned building that challenging project
  • Architecture deep-dives: How you solved complex problems in your apps
  • Open source adventures: That library you created or contributed to
  • Best practices you discovered: Patterns and techniques that actually work
  • Side projects: That fun thing you built that shows off .NET in a cool way
  • Productivity hacks: Tools and techniques that make you a more effective developer

The community wants to hear about what you’re building with .NET, not rehash what they can already read in the release notes. Show us your creativity!

The Important Stuff

Don't Procrastinate!

The Call for Content closes on August 31, 2025, at 11:59 PM PDT. Trust us, you don’t want to be scrambling at the last minute!

Here’s What You Need to Know

  • When: November 11-13, 2025
  • Where: Online (present from wherever you are!)
  • How long: 30 minutes including Q&A
  • Time zones: Present in your own time zone – we’ll figure out the schedule magic
  • Sessions per speaker: We’re limiting it to 1 session per person so more folks can participate

Ready to Submit? Here’s How

  1. Head over to: sessionize.com/net-conf-2025
  2. Write an awesome proposal: Give us a catchy title and tell us why your session will be amazing
  3. Tell us about you: Share your background and any speaking experience (but don’t stress if you’re new!)
  4. Show your creds: Got videos of past talks? Links to your projects? Throw them in there!

Insider Tip

Include videos of past talks or demos of your projects in your proposal. It helps us see your presentation style and gets us excited about your session!

Come Celebrate with the .NET Family

Look, .NET Conf isn’t just another conference – it’s our yearly family reunion! It’s where developers from every corner of the planet come together to geek out about code, share those “I can’t believe that worked!” moments, and push each other to build even cooler stuff.

Whether you’re the person building the next big web app, creating mobile experiences that users love, or figuring out how to make AI work in your business apps – your story matters. The .NET community wants to celebrate your wins, learn from your mistakes, and cheer you on as you tackle your next challenge.

Your Turn to Shine

Here’s the thing – the .NET ecosystem is amazing because of people like you. That unique way you solved a problem, that library you built in your spare time, that “what if I tried this?” experiment that actually worked – that stuff is pure gold to other developers.

Don’t let that voice in your head tell you “someone else probably knows this better.” Nope! Your perspective, your journey, your hard-won insights could be exactly what another developer needs to hear right now.

Let’s Make This the Best .NET Conf Yet

.NET Conf 2025 is going to be incredible, but it won’t be complete without voices from our amazing community. We’ve got the submission portal open through August 31st, and we’re genuinely excited to see what awesome sessions you’ll propose.

Here’s what we know: .NET Conf is where magic happens. It’s where a casual conversation in the chat leads to your next big project idea. It’s where that demo you’re nervous about becomes the solution someone’s been searching for. It’s where you realize you’re part of something way bigger than just writing code.

Your session could be the one that sparks the next breakthrough, solves a problem thousands of developers are facing, or just makes someone’s day a little brighter with a cool demo.

Don’t wait – submit your session today and let’s make .NET Conf 2025 absolutely unforgettable! And while you’re at it, head over to dotnetconf.net to add the event to your calendar so you don’t miss the big day.

Happy coding, friends! We can’t wait to see you there! 🚀✨

The new Dependabot NuGet updater: 65% faster with native .NET
Mon, 04 Aug 2025 15:00:00 +0000
https://devblogs.microsoft.com/dotnet/the-new-dependabot-nuget-updater/

Discover the new Dependabot NuGet updater that improves performance, accuracy, and developer experience by leveraging native .NET tooling.

The post The new Dependabot NuGet updater: 65% faster with native .NET appeared first on .NET Blog.

If you’ve ever waited impatiently for Dependabot to update your .NET dependencies, or worse, watched it fail with cryptic errors, we have some great news. Over the past year, the Dependabot team has worked on a refactor of the NuGet updater, and the results are impressive.

From hybrid to native

The previous NuGet updater used a hybrid solution that relied heavily on manual XML parsing and string replacement operations written in Ruby. While this approach worked for basic scenarios, it struggled with the complexity and nuances of modern .NET projects. The new updater takes a completely different approach by using .NET’s native tooling directly.

Instead of trying to reverse-engineer what NuGet and MSBuild do, the new updater invokes the actual .NET tooling directly.

This shift from manual XML manipulation to using the actual .NET toolchain means the updater now behaves exactly like the tools developers use every day.

Performance and reliability improvements

The improvements in the new updater are dramatic. The test suite that previously took 26 minutes now completes in just 9 minutes—a 65% reduction in runtime. But speed is only part of the story. The success rate for updates has jumped from 82% to 94%, meaning significantly fewer failed updates that require manual intervention.

These improvements work together to deliver a faster, more reliable experience. When Dependabot runs on your repository, it spends less time processing updates and succeeds more often—reducing both the wait time and the manual intervention needed to keep your dependencies current.

Real dependency detection with MSBuild

One of the most significant improvements is how the updater discovers and analyzes dependencies. Previously, the Ruby-based parser would attempt to parse project files as XML and guess what the final dependency graph would look like. This approach was fragile and missed complex scenarios.

The new updater uses MSBuild’s project evaluation engine to properly understand your project’s true dependency structure. This means it can now handle complex scenarios that previously caused problems.

For example, the old parser missed conditional package references like this:

<ItemGroup Condition="'$(TargetFramework)' == 'net8.0'">
  <PackageReference Include="Microsoft.Extensions.Hosting" Version="8.0.0" />
</ItemGroup>

With the new MSBuild-based approach, the updater can handle:

  • Conditional package references based on target framework or build configuration
  • Directory.Build.props and Directory.Build.targets that modify dependencies
  • MSBuild variables and property evaluation throughout the project hierarchy
  • Complex package reference patterns that weren’t reliably detected before
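For example, dependencies contributed outside a project file now surface correctly. A hypothetical Directory.Build.props like the one below (the package name and version are illustrative, not from the article) adds a package to every project under it, which the old XML-parsing approach would have missed:

```xml
<!-- Hypothetical Directory.Build.props at the repository root.
     Every project under this directory inherits this PackageReference. -->
<Project>
  <ItemGroup>
    <PackageReference Include="StyleCop.Analyzers" Version="1.1.118" PrivateAssets="all" />
  </ItemGroup>
</Project>
```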

Dependency resolution solving

One of the most impressive features of the new updater is its sophisticated dependency resolution engine. Instead of updating packages in isolation, it now performs comprehensive conflict resolution. This includes two key capabilities:

Transitive dependency updates

When you have a vulnerable transitive dependency that can’t be directly updated, the updater will now automatically find the best way to resolve the vulnerability. Let’s look at a real scenario where your app depends on a package that has a vulnerable transitive dependency:

YourApp
└── PackageA v1.0.0
    └── TransitivePackage v2.0.0 (CVE-2024-12345)

The new updater follows a smart resolution strategy:

  1. First, it checks if PackageA has a newer version available that depends on a non-vulnerable version of TransitivePackage. If PackageA v2.0.0 depends on TransitivePackage v3.0.0 (which fixes the vulnerability), Dependabot will update PackageA to v2.0.0.

  2. If no updated version of PackageA is available, Dependabot will add a direct dependency on a non-vulnerable version of TransitivePackage to your project. This leverages NuGet’s ‘direct dependency wins’ rule, where direct dependencies take precedence over transitive ones:

<PackageReference Include="PackageA" Version="1.0.0" />
<PackageReference Include="TransitivePackage" Version="3.0.0" />

With this approach, even though PackageA v1.0.0 still references TransitivePackage v2.0.0, NuGet will use v3.0.0 because it’s a direct dependency of your project. This ensures your application uses the secure version without waiting for PackageA to be updated.

Related package updates

The updater also identifies and updates related packages to avoid version conflicts. If updating one package in a family (like Microsoft.Extensions.* packages) would create version mismatches with related packages, the updater automatically updates the entire family to compatible versions.

This intelligent conflict resolution dramatically reduces the number of failed updates and eliminates the manual work of resolving package conflicts.
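As an illustrative before/after sketch (the versions are hypothetical, not taken from a real update), bumping one package in the Microsoft.Extensions.* family now moves its siblings in the same update so the resolved versions stay aligned:

```xml
<!-- Before the update -->
<PackageReference Include="Microsoft.Extensions.Hosting" Version="8.0.0" />
<PackageReference Include="Microsoft.Extensions.Logging" Version="8.0.0" />

<!-- After: the whole family is updated together to compatible versions -->
<PackageReference Include="Microsoft.Extensions.Hosting" Version="8.0.1" />
<PackageReference Include="Microsoft.Extensions.Logging" Version="8.0.1" />
```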

Honoring global.json

The new updater now properly respects global.json files, a feature that was inconsistently supported in the previous version. If your project specifies a particular .NET SDK version, the updater will install the exact SDK version specified in your global.json. This ensures that the updater evaluates dependency updates using the same .NET SDK version that your development team and CI/CD pipelines use, eliminating a common source of inconsistencies.

This improvement complements Dependabot’s recently added capability to update .NET SDK versions in global.json files. While the SDK updater keeps your .NET SDK version current with security patches and improvements, the NuGet updater respects whatever SDK version you’ve chosen—whether manually specified or automatically updated by Dependabot. This seamless integration means you get the best of both worlds: automated SDK updates when you want them, and consistent package dependency resolution that honors your SDK choices.
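For reference, a global.json that pins the SDK looks like the following; the version number shown is illustrative:

```json
{
  "sdk": {
    "version": "8.0.303"
  }
}
```

With this file present, the updater installs and evaluates against exactly this SDK version.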

Full Central Package Management support

Central Package Management (CPM) has become increasingly popular in .NET projects for managing package versions across multiple projects. The previous updater had limited support for CPM scenarios, often requiring manual intervention.

The new updater provides comprehensive CPM support. It automatically detects Directory.Packages.props files, properly updates versions in centralized version files, supports package overrides in individual projects, and handles transitive dependencies managed through CPM. Whether you’re using CPM for version management, security vulnerability management, or both, the new updater handles these scenarios seamlessly.
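If you haven't used CPM before, a minimal setup looks like this (package name and version are illustrative): versions are declared once in Directory.Packages.props, and individual projects reference packages without a Version attribute.

```xml
<!-- Directory.Packages.props at the repository root -->
<Project>
  <PropertyGroup>
    <ManagePackageVersionsCentrally>true</ManagePackageVersionsCentrally>
  </PropertyGroup>
  <ItemGroup>
    <PackageVersion Include="Microsoft.Extensions.Hosting" Version="8.0.0" />
  </ItemGroup>
</Project>

<!-- In an individual .csproj: no Version attribute needed -->
<ItemGroup>
  <PackageReference Include="Microsoft.Extensions.Hosting" />
</ItemGroup>
```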

Support for all compliant NuGet feeds

The previous updater struggled with private NuGet feeds, especially those with non-standard authentication or API implementations. The new updater uses NuGet’s official client libraries. This means it automatically supports all NuGet v2 and v3 feeds, including nuget.org, Azure Artifacts, and GitHub Packages. It also:

  • Works with standard authentication mechanisms like API keys or personal access tokens
  • Handles feed-specific behaviors and quirks that the NuGet client manages
  • Supports package source mapping configurations for enterprise scenarios

If your .NET tools can access a feed, Dependabot can too.
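As an example of the package source mapping support, here is a minimal nuget.config sketch (the private feed name and URL are placeholders) that the new updater can honor:

```xml
<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <packageSources>
    <clear />
    <add key="nuget.org" value="https://api.nuget.org/v3/index.json" />
    <!-- Placeholder private feed -->
    <add key="contoso" value="https://pkgs.contoso.example/nuget/v3/index.json" />
  </packageSources>
  <packageSourceMapping>
    <!-- Contoso.* packages come only from the private feed; everything else from nuget.org -->
    <packageSource key="nuget.org">
      <package pattern="*" />
    </packageSource>
    <packageSource key="contoso">
      <package pattern="Contoso.*" />
    </packageSource>
  </packageSourceMapping>
</configuration>
```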

What this means for you

If you’re using Dependabot for .NET projects, you should notice these improvements immediately. Faster updates mean dependency scans and update generation happen more quickly. More successful updates result in fewer failed updates that require manual intervention. Better accuracy ensures updates properly respect your project’s configuration and constraints. And when updates do fail, you’ll get clearer, more actionable error messages.

You don’t need to change anything in your dependabot.yml configuration—you automatically get these improvements for all .NET projects.

Looking forward

This rewrite represents more than just performance improvements—it’s a foundation for future enhancements. By building on .NET’s native tooling, the Dependabot team will be able to add support for new .NET features as they’re released, improve integration with .NET developer workflows, extend capabilities to handle more complex enterprise scenarios, and provide better diagnostics and debugging information.

The new architecture also makes it easier for the community to contribute improvements and fixes, as we rewrote the codebase in C# and leverage the same tools and libraries that .NET developers use every day. This means that developers can make contributions using familiar .NET development practices, making it easier for the community to help shape the future of Dependabot’s NuGet support.

Try it out

The new NuGet updater is already live and processing updates for .NET repositories across GitHub. If you haven’t enabled Dependabot for your .NET projects yet, now is a great time to start. Here’s a minimal configuration to get you started:

version: 2
updates:
  - package-ecosystem: "nuget"
    directory: "/"
    schedule:
      interval: "weekly"

And if you’re already using Dependabot, you should already be seeing the improvements. Faster updates, fewer failures, and clearer error messages—all without changing a single line of configuration.

The rewrite demonstrates how modern dependency management should work: fast, accurate, and transparent. By leveraging the same tools that developers use every day, Dependabot can now provide an experience that feels native to the .NET ecosystem while delivering the automation and security benefits that make dependency management less of a chore.

The post The new Dependabot NuGet updater: 65% faster with native .NET appeared first on .NET Blog.

Building a Full-Stack App with React and Aspire: A Step-by-Step Guide
https://devblogs.microsoft.com/dotnet/new-aspire-app-with-react/ (Wed, 30 Jul 2025)

Discover how to build a full-stack application with React and Aspire, integrating a React front-end with an ASP.NET Core Web API and persisting data to a database.

In this post we will build a TODO app from start to finish using Aspire and React. We will do this using the CLI and C# Dev Kit. The todo items will be stored in a SQLite database, and the React front-end will use a Web API to handle all interactions with the data. I’m going to show this with the dotnet CLI, the Aspire CLI, and C# Dev Kit, but you can follow along with any IDE or editor of your choice. The resulting app can be published to any web host which supports ASP.NET Core – including Linux containers. First, let’s start with the prerequisites to ensure you have all the components needed to follow along with this tutorial.

Source Code

All the code from this post can be found at sayedihashimi/todojsaspire.

Prerequisites

In this tutorial, we will walk through installing Aspire, but you should already have its dependencies installed; you can learn more at Aspire Prerequisites. Installing those items will not be covered in this post.

Install Aspire

For detailed instructions on getting Aspire, and its dependencies, installed visit Aspire setup and tooling. We will go through the basics here. After installing .NET 9 and the other dependencies we will install the project templates using dotnet new.

Workload Migration

As of version 9, Aspire no longer requires a separate workload installation. If you previously installed the Aspire workload, use dotnet workload list to confirm whether it is installed and dotnet workload uninstall aspire to remove it.

Next, install the new Aspire CLI. The command below installs the tool globally, along with the dotnet new templates.

On Windows:

iex "& { $(irm https://aspire.dev/install.ps1) }"

On Linux, or macOS:

curl -sSL https://aspire.dev/install.sh | bash -s

After installing this tool, you can run it by executing aspire on the command line. You can explore the usage of this tool with aspire --help. Now that we have the tools installed, let’s move on and create the Aspire app.

Create the Aspire app

Now that the machine is ready with all the prerequisites we can get started. Open an empty folder in VS Code and add a new directory named src for the source files.

Let’s create the Aspire app to start with. In VS Code open the command palette CTRL/CMD-SHIFT-P and type in New Project. See the following image.

VS Code command palette showing the New Project option highlighted

Select the Aspire Starter App template and hit enter.

VS Code command palette showing the Aspire Starter App template selection

When prompted for the project name use “TodojsAspire” and select “src” as the destination folder to follow along. I will walk you through using New Project to create the Aspire app in the video below. Alternatively, you can use dotnet new aspire-starter or aspire new aspire-starter in a terminal for the same result.

Now that the starter app has been created you should see the following in the Explorer in VS Code. In this case I added the following files before creating the project .gitattributes, .gitignore and LICENSE.

VS Code Explorer panel showing the folder structure of the newly created Aspire app

Now would be a good time to execute a build to ensure that there are no build issues. Open the command palette with CTRL/CMD-SHIFT-P and select “.NET: Build”. You can also use the Solution Explorer to perform the build if you prefer that method.

The Aspire Starter App template creates a few projects, including an ASP.NET Core front-end. Since we are going to use React for the front-end, we can delete the TodojsAspire.Web project and remove any references to it in the remaining files. The easiest way to do this is to use the Solution Explorer, which comes with C# Dev Kit. After opening the Solution Explorer, right-click on the TodojsAspire.Web project and select Remove. See the following image.

Solution Explorer in VS Code showing the context menu with Delete option for removing the TodojsAspire.Web project

After deleting the project we need to remove any references to it. The things that need to be removed include:

  • The project reference in TodojsAspire.AppHost
  • The code in AppHost.cs (in TodojsAspire.AppHost) that registers the web front-end

In the command palette you can use .NET: Remove Project Reference to delete the reference in TodojsAspire.AppHost. Then delete the following code from the AppHost.cs file in the same project.

builder.AddProject<Projects.TodojsAspire_Web>("webfrontend")
    .WithExternalHttpEndpoints()
    .WithHttpHealthCheck("/health")
    .WithReference(apiService)
    .WaitFor(apiService);

Soon we will replace these lines with what is needed to integrate the React app. You should also delete the TodojsAspire.Web folder from the src directory. After making those changes, you should do a build to ensure that nothing was missed. To start a build, open the command palette and select Task: Run Build Task and then select dotnet: build. Now that we have cleaned up the solution, we will move on to start updating the API project to expose endpoints to manage the TODO items.

Configure the Web API

To get the API project going, we will first add a model class for the TODO items, and then use dotnet scaffold to generate the initial API endpoints. Add the Todo class (Todo.cs) below to the TodojsAspire.ApiService project.

using System.ComponentModel.DataAnnotations;
namespace TodojsAspire.ApiService;

public class Todo
{
    public int Id { get; set; }
    [Required]
    public string Title { get; set; } = default!;
    public bool IsComplete { get; set; } = false;
    // The position of the todo in the list, used for ordering.
    // When updating this, make sure to not duplicate values.
    // To move an item up/down, swap the values of the position
    [Required]
    public int Position { get; set; } = 0;
}

Now that we have added the model class, we will scaffold the API endpoints with dotnet scaffold.

We can use dotnet scaffold to generate API endpoints for the Todo model. To install this tool, execute the following command.

dotnet tool install --global Microsoft.dotnet-scaffold

When using dotnet scaffold it’s easiest to cd into the project directory and then execute it from there. This tool is interactive by default; to get started, execute dotnet scaffold and make the following selections.

  • Category = API
  • Command = Minimal API
  • Project = TodojsAspire.ApiService
  • Model = Todo
  • Endpoints file name = TodoEndpoints
  • Open API Enabled = No
  • Data context class = TodoDbContext
  • Database provider = sqlite-efcore
  • Include prerelease = No

You can see the entire interaction in the following animation.

The following changes were made to the TodojsAspire.ApiService project.

  • TodoEndpoints.cs file was created with the Minimal API endpoints.
  • Program.cs was modified to initialize the SQLite database, read the connection string from appsettings.json, and call the method that maps the endpoints in TodoEndpoints.
  • The project file was modified to add needed NuGet packages.
  • appsettings.json was modified to add the connection to the local db file.

Kick off another build to ensure that scaffolding has worked successfully. If you get any build errors regarding missing packages, ensure that the following packages have been installed.

  • Microsoft.EntityFrameworkCore
  • Microsoft.EntityFrameworkCore.Design
  • Microsoft.EntityFrameworkCore.Sqlite
  • Microsoft.EntityFrameworkCore.Tools
  • System.ComponentModel.Annotations

You can install packages using dotnet add package [PACKAGE NAME].

Open the new file TodoEndpoints.cs so that we can take a look. Since this is a simple app, we can simplify the URL to the API. When you have the TodoEndpoints.cs class open in VS Code, use Replace all to replace /api/ with /. The resulting class, TodoEndpoints.cs, is below.

using Microsoft.AspNetCore.Http.HttpResults;
using Microsoft.EntityFrameworkCore;
using TodojsAspire.ApiService;

public static class TodoEndpoints
{
    public static void MapTodoEndpoints(this IEndpointRouteBuilder routes)
    {
        var group = routes.MapGroup("/Todo");

        group.MapGet("/", async (TodoDbContext db) =>
        {
            return await db.Todo.ToListAsync();
        })
        .WithName("GetAllTodos");

        group.MapGet("/{id}", async Task<Results<Ok<Todo>, NotFound>> (int id, TodoDbContext db) =>
        {
            return await db.Todo.AsNoTracking()
                .FirstOrDefaultAsync(model => model.Id == id)
                is Todo model
                    ? TypedResults.Ok(model)
                    : TypedResults.NotFound();
        })
        .WithName("GetTodoById");

        group.MapPut("/{id}", async Task<Results<Ok, NotFound>> (int id, Todo todo, TodoDbContext db) =>
        {
            var affected = await db.Todo
                .Where(model => model.Id == id)
                .ExecuteUpdateAsync(setters => setters
                .SetProperty(m => m.Title, todo.Title)
                .SetProperty(m => m.IsComplete, todo.IsComplete)
                .SetProperty(m => m.Position, todo.Position)
        );

            return affected == 1 ? TypedResults.Ok() : TypedResults.NotFound();
        })
        .WithName("UpdateTodo");

        group.MapPost("/", async (Todo todo, TodoDbContext db) =>
        {
            db.Todo.Add(todo);
            await db.SaveChangesAsync();
            return TypedResults.Created($"/Todo/{todo.Id}",todo);
        })
        .WithName("CreateTodo");

        group.MapDelete("/{id}", async Task<Results<Ok, NotFound>> (int id, TodoDbContext db) =>
        {
            var affected = await db.Todo
                .Where(model => model.Id == id)
                .ExecuteDeleteAsync();

            return affected == 1 ? TypedResults.Ok() : TypedResults.NotFound();
        })
        .WithName("DeleteTodo");
    }
}

This file contains the CRUD methods needed to support reading and writing the content from the database. In the front-end that we will create soon, we want to give the user the ability to move tasks up and down in the list. There are lots of different ways to implement this; since this is a simple todo app for a single user, we don’t need to worry about a large number of items. To keep it simple, we will add two new endpoints: MoveTaskUp and MoveTaskDown. The code for these endpoints is below; add it after the last endpoint in the TodoEndpoints class.

// Endpoint to move a task up in the list
group.MapPost("/move-up/{id:int}", async Task<Results<Ok, NotFound>> (int id, TodoDbContext db) =>
{
    var todo = await db.Todo.FirstOrDefaultAsync(t => t.Id == id);
    if (todo is null)
    { return TypedResults.NotFound(); }

    // Find the todo with the largest position less than the current todo
    var prevTodo = await db.Todo
        .Where(t => t.Position < todo.Position)
        .OrderByDescending(t => t.Position)
        .FirstOrDefaultAsync();

    if (prevTodo is null)
    { return TypedResults.Ok(); }

    // Swap positions
    (todo.Position, prevTodo.Position) = (prevTodo.Position, todo.Position);
    await db.SaveChangesAsync();
    return TypedResults.Ok();
})
.WithName("MoveTaskUp");

// Endpoint to move a task down in the list
group.MapPost("/move-down/{id:int}", async Task<Results<Ok, NotFound>> (int id, TodoDbContext db) =>
{
    var todo = await db.Todo.FirstOrDefaultAsync(t => t.Id == id);
    if (todo is null)
    { return TypedResults.NotFound(); }

    // Find the todo with the smallest position greater than the current todo
    var nextTodo = await db.Todo
        .Where(t => t.Position > todo.Position)
        .OrderBy(t => t.Position)
        .FirstOrDefaultAsync();

    if (nextTodo is null)
    { return TypedResults.Ok(); } // Already at the bottom or no next todo

    // Swap positions values
    (todo.Position, nextTodo.Position) = (nextTodo.Position, todo.Position);
    await db.SaveChangesAsync();
    return TypedResults.Ok();
})
.WithName("MoveTaskDown");

MoveTaskUp finds the task with the next lower position and swaps the position values. The line (todo.Position, prevTodo.Position) = (prevTodo.Position, todo.Position); uses tuple assignment to swap the position values in a single statement.
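As a standalone illustration of that pattern (not part of the project’s code), tuple assignment swaps two values without a temporary variable:

```csharp
int a = 1, b = 2;
(a, b) = (b, a);
// a is now 2 and b is now 1; no temporary variable is needed.
```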

Configure the database

Now that we have all the database related code ready, we need to create an EF migration. After we create the migration we will integrate the database with the Aspire dashboard.

To create the EF migration, open the terminal in VS Code, cd into the TodojsAspire.ApiService project directory (src/TodojsAspire.ApiService). Then execute the following command.

  • dotnet ef migrations add TodoEndpointsInitialCreate

The migrations command will generate a new migration named TodoEndpointsInitialCreate and add it to the project. At this time you would typically also run dotnet ef database update but that isn’t needed in this case. We will configure the project to run migrations when it is started by the AppHost. Let’s configure the database in the AppHost now.

For SQLite support in the AppHost, we will need to use the Aspire Community Toolkit. Execute the command below in the “src” folder to install SQLite support in the AppHost.

aspire add sqlite

Follow the prompts to add the package. This will add a PackageReference to the AppHost and make other APIs available for the builder.

Open the AppHost.cs file in the TodojsAspire.AppHost project. Replace the contents with the code below.

var builder = DistributedApplication.CreateBuilder(args);

var db = builder.AddSqlite("db")
    .WithSqliteWeb();

var apiService = builder.AddProject<Projects.TodojsAspire_ApiService>("apiservice")
    .WithReference(db)
    .WithHttpHealthCheck("/health");

builder.Build().Run();

In AppHost.cs we have added a SQLite database and registered the API service. We called WithReference(db) on the API so that it gets the connection string to the database.

To configure the ApiService we will need to add the package CommunityToolkit.Aspire.Microsoft.EntityFrameworkCore.Sqlite and update the connection to the database. In a terminal first cd into the ApiService project and execute the command below.

dotnet add package CommunityToolkit.Aspire.Microsoft.EntityFrameworkCore.Sqlite

Modify the Program.cs in the Api project to have the following contents.

using Microsoft.EntityFrameworkCore;

var builder = WebApplication.CreateBuilder(args);

builder.AddSqliteDbContext<TodoDbContext>("db");

// Add service defaults & Aspire client integrations.
builder.AddServiceDefaults();

// Add services to the container.
builder.Services.AddProblemDetails();

// Learn more about configuring OpenAPI at https://aka.ms/aspnet/openapi
builder.Services.AddOpenApi();

var app = builder.Build();

// Configure the HTTP request pipeline.
app.UseExceptionHandler();

if (app.Environment.IsDevelopment())
{
    app.MapOpenApi();
}

app.MapDefaultEndpoints();

app.MapTodoEndpoints();

using var scope = app.Services.CreateScope();
var dbContext = scope.ServiceProvider.GetRequiredService<TodoDbContext>();
await dbContext.Database.MigrateAsync();

app.Run();

The most important changes here are in how the database is initialized. Previously the connection string came from the appsettings.json file in the API project; it’s now injected with builder.AddSqliteDbContext<TodoDbContext>("db"). You should remove the connection string from the appsettings.json file now. At the bottom of Program.cs we have added await dbContext.Database.MigrateAsync() to ensure that the database is up-to-date when the AppHost starts the API project. We will now move on to try out the Web API to ensure there are no issues.

Exercise the API to ensure it’s working as expected

Now that we have all the endpoints that we need, it’s time to test this out. To test this we will add an HTTP file. For HTTP file support in VS Code, you’ll need to add an extension. There are several that you can pick from, including REST Client and httpYac. Either of those will work for our needs. For this tutorial, I’ll show it with the REST Client, but the experience with httpYac is very similar and you should be able to follow along. To install that use the Extensions tab in VS Code and type in “REST Client” in the search box, then click Install. See the next image.

VS Code Extensions panel showing the REST Client extension ready for installation

In the TodojsAspire.ApiService project open the file named TodojsAspire.ApiService.http. If your project doesn’t have a file with that name, create a new one. The name of the HTTP file doesn’t matter; you can name it whatever you like. Before we start writing any requests in the HTTP file, run the app. To start the app, you have a few options when using C# Dev Kit. You can use the Run and Debug tab in VS Code; you can use Start Debugging (F5) or Start without Debugging (CTRL-F5). In this case we don’t need to debug so we can use the keyboard shortcut CTRL-F5 to Start without Debugging and choose App Host [default configuration]. You should have a .cs file opened in the VS Code editor when invoking that gesture. That will ensure that you get the right options from VS Code. When you are prompted to select the launch configuration, choose the AppHost project. This will start the Aspire Dashboard and it will automatically startup the ApiService as well.

For detailed info on the dashboard, see the article Aspire dashboard overview – Aspire | Microsoft Learn; we will go over the basics here.

The dashboard shows the projects that have been configured and their status, and you can easily navigate to the app, view logs, and see other important info. It currently shows the ApiService project, the SQLite database, and a web interface to interact with the database. Later, when we add the React app, it will appear in the dashboard as well. See the screenshot below.

Aspire dashboard showing the ApiService and SQLite database components with their status and endpoints

In the screenshot above, you can see the URLs for the ApiService project. Copy one of the URLs for the ApiService project, we will need that to exercise the app. You can click on the URL for db-sqliteweb to open a web interface to interact with the database, but that isn’t needed for this tutorial.

By default, when you start the AppHost, you will get a new database and the migration(s) will automatically be applied to the database to update it. If you want your local data to persist you can override this in AppHost by specifying a specific connection string to be used. Now let’s move on to create an HTTP file to ensure that the endpoints work as expected.
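One possible approach, sketched below, is to point the SQLite resource at a fixed file path. The path, file name, and the exact AddSqlite overload are assumptions here; check the CommunityToolkit.Aspire.Hosting.Sqlite documentation for the actual parameters.

```csharp
// Hypothetical sketch: keep the SQLite database in a fixed file so the data
// survives AppHost restarts. The databasePath/databaseFileName parameters are
// assumptions; verify the AddSqlite overloads in the Community Toolkit docs.
var db = builder.AddSqlite("db", databasePath: "/data/todo", databaseFileName: "todo.db")
    .WithSqliteWeb();
```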

Below is the HTTP file; you may need to update the base URL variable on the first line to match your project. For more info on HTTP files, see the REST Client documentation or Use .http files in Visual Studio 2022 | Microsoft Learn (note: some of the features described there aren’t supported outside of Visual Studio 2022).

@todoapibaseurl = https://localhost:7473

GET {{todoapibaseurl}}/Todo/

###

# Create a new todo
POST {{todoapibaseurl}}/Todo/
Content-Type: application/json

{
  "title": "Sample Todo2",
  "isComplete": false,
  "position": 1
}

###
POST {{todoapibaseurl}}/Todo/
Content-Type: application/json

{
  "title": "Sample Todo2",
  "isComplete": false,
  "position": 2
}
###
POST {{todoapibaseurl}}/Todo/
Content-Type: application/json

{
  "title": "Sample Todo3",
  "isComplete": false,
  "position": 3
}

###
PUT {{todoapibaseurl}}/Todo/1
Content-Type: application/json

{
  "id": 1,
  "title": "Updated Todo",
  "isComplete": true,
  "position": 20
}

###

POST {{todoapibaseurl}}/Todo/
Content-Type: application/json

{
  "title": "Sample Todo no position",
  "isComplete": false
}
###

# Delete a todo
DELETE {{todoapibaseurl}}/Todo/1

###

POST {{todoapibaseurl}}/Todo/move-up/3
###

When you paste the value for the API URL make sure to remove the trailing slash.

With this HTTP file we can exercise the app. It includes requests for most endpoints in the TodoEndpoints class. You can execute the requests with Send Request above the URL line, or use Rest Client: Send Request in the command palette. Try out the different requests to make sure things are working correctly. Remember that the database is recreated when the app is restarted, so you don’t need to worry about this test data sticking around. When working with this file I noticed two issues that should be addressed.

  • When Todo items are returned, they are not sorted by Position.
  • When a Todo item is POSTed without a position, the value for position will be assigned to 0.

To fix the first issue, update the group.MapGet("/", ...) endpoint to have the following code.

group.MapGet("/", async (TodoDbContext db) =>
{
    return await db.Todo.OrderBy(t => t.Position).ToListAsync();
})
.WithName("GetAllTodos");

To fix the issue regarding the missing position value, update the POST method to have the following code.

group.MapPost("/", async (Todo todo, TodoDbContext db) =>
{
    if (todo.Position <= 0)
    {
        // If position is not set, assign it to the next available position
        todo.Position = await db.Todo.AnyAsync()
            ? await db.Todo.MaxAsync(t => t.Position) + 1
            : 1; // Start at position 1 if no todos exist
    }
    db.Todo.Add(todo);
    await db.SaveChangesAsync();
    return TypedResults.Created($"/Todo/{todo.Id}", todo);
})
.WithName("CreateTodo");

With this change, when a Todo item is submitted without a value for Position, the value for Position will be set to the max value of Position in the database + 1. Now we have everything that we need for the API, we will move on to start the JS front-end.

Build the React front-end

To create the React project we will use the npm command which is installed with node. Visit Node.js — Download Node.js® to get it installed. We will use vite as the front-end build tool.

Open a terminal, cd into the src directory and then execute the command below.

npm create vite@latest todo-frontend -- --template react

When prompted specify the following values.

  • Framework = React
  • Variant = JavaScript

This will create a new folder named todo-frontend in the src directory and then scaffold the React app into that folder. After the app has been scaffolded, npm will tell you to execute the following commands to initialize the app.

  • cd todo-frontend
  • npm install
  • npm run dev

These commands will install the dependencies and run the app to ensure that there are no issues. If you encounter an error, delete the todo-frontend folder and try again. You can use CTRL-C to exit the app after you execute npm run dev. Now that we have a working front-end, let’s integrate it with the AppHost. We will do that with the Aspire CLI.

We will use the Aspire CLI to integrate the front-end with the AppHost by installing the Node.js integration package in the AppHost project. Aspire integrations are NuGet packages that bootstrap configuration for you, and the Aspire CLI streamlines acquiring them. Execute the command below in the src directory; it will add the Aspire.Hosting.NodeJs package to the AppHost project and enable some new extension methods.

aspire add nodejs

Follow the prompts to add the package.

We will add a Community Toolkit package to add Vite support. Execute the command below.

aspire add ct-extensions

When prompted select ct-extensions (CommunityToolkit.Aspire.Hosting.NodeJS.Extensions).

Open the AppHost.cs file in the TodojsAspire.AppHost project and add the following to that file before builder.Build().Run();.

builder.AddViteApp(name: "todo-frontend", workingDirectory: "../todo-frontend")
    .WithReference(apiService)
    .WaitFor(apiService)
    .WithNpmPackageInstallation();

This will add the front-end as an app in AppHost project and add integration with the dashboard. Now we need to configure the front-end to consume the port that the AppHost selects for the app. Open the vite.config.js file in the todo-frontend folder. Replace the existing content with the following.

import { defineConfig, loadEnv } from 'vite'
import react from '@vitejs/plugin-react'

export default defineConfig(({ mode }) => {
  const env = loadEnv(mode, process.cwd(), '');

  return {
    plugins: [react()],
    server:{
      port: parseInt(env.VITE_PORT),
      proxy: {
        // "apiservice" is the name of the API in AppHost.cs.
        '/api': {
          target: process.env.services__apiservice__https__0 || process.env.services__apiservice__http__0,
          changeOrigin: true,
          secure: false,
          rewrite: (path) => path.replace(/^\/api/, '')
        }
      }
    },
    build:{
      outDir: 'dist',
      rollupOptions: {
        input: './index.html'
      }
    }
  }
})

This configures a proxy so that all API requests are routed through the same origin, and it injects the URL for the ApiService. Those are all the changes needed to integrate the front-end with the AppHost. Start the AppHost and you should see the front-end, along with the ApiService, in the dashboard.
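To make the proxy behavior concrete, here is a standalone sketch (plain Node.js, not part of the app) of the same rewrite rule used in vite.config.js, showing how front-end routes map onto ApiService routes:

```javascript
// The same rewrite rule as in vite.config.js: the front-end makes
// requests against /api/..., and the proxy strips the /api prefix
// before forwarding the request to the ApiService.
const rewrite = (path) => path.replace(/^\/api/, '');

console.log(rewrite('/api/Todo'));    // -> /Todo
console.log(rewrite('/api/Todo/42')); // -> /Todo/42
```

So a fetch('/api/Todo') call from the React app reaches the ApiService’s /Todo endpoint while the browser only ever talks to a single origin.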

Troubleshooting a vite.config.js load failure

If you see an error that the vite.config.js file failed to load, run npm install in the todo-frontend folder, then press the play button next to the front-end in the Aspire Dashboard. You shouldn’t need to restart the AppHost.

The dashboard should look like the following.

Aspire dashboard showing all components including the todo-frontend React app, ApiService, and SQLite database

If you click on the todo-frontend URL, you’ll see the default Vite React template in the browser. Now we can start building our front-end. I’ll walk you through all the steps needed to get this app working.

First let’s add the components that we need for the todo app, and then we will update the files needed to use those components. In the todo-frontend/src folder, add a components folder. We will start with the component for a todo item: create an empty file in that folder named TodoItem.jsx and paste the contents below into it.

/**
 * TodoItem component represents a single task in the TODO list.
 * It displays the task text and provides buttons to delete the task,
 * move the task up, and move the task down in the list.
 *
 * @param {Object} props - The properties passed to the component.
 * @param {string} props.task - The text of the task.
 * @param {function} props.deleteTaskCallback - Callback function to delete the task.
 * @param {function} props.moveTaskUpCallback - Callback function to move the task up in the list.
 * @param {function} props.moveTaskDownCallback - Callback function to move the task down in the list.
 */
function TodoItem({ task, deleteTaskCallback, moveTaskUpCallback, moveTaskDownCallback }) {
  return (
      <li aria-label="task">
          <span className="text">{task}</span>
          <button
              type="button"
              aria-label="Delete task"
              className="delete-button"
              onClick={() => deleteTaskCallback()}>
              🗑
          </button>
          <button
              type="button"
              aria-label="Move task up"
              className="up-button"
              onClick={() => moveTaskUpCallback()}>
              ⇧
          </button>
          <button
              type="button"
              aria-label="Move task down"
              className="down-button"
              onClick={() => moveTaskDownCallback()}>
              ⇩
          </button>
      </li>
  );
}

export default TodoItem;

This is a basic component that displays a todo item along with buttons for the actions: move up, move down, and delete. We will use this component in the TodoList component that we add next, where we will wire the buttons up to their actions. Add a new file named TodoList.jsx in the components folder and add the following content.

import { useState, useEffect } from 'react';
import './TodoList.css';
import TodoItem from './TodoItem';

/**
 * Todo component represents the main TODO list application.
 * It allows users to add new todos, delete todos, and move todos up or down in the list.
 * The component maintains the state of the todo list and the new todo input.
 */
function TodoList() {
    const [tasks, setTasks] = useState([]);
    const [newTaskText, setNewTaskText] = useState('');
    const [todos, setTodo] = useState([]);

    const getTodo = async ()=>{
        fetch("/api/Todo")
        .then(response => response.json())
        .then(json => setTodo(json))
        .catch(error => console.error('Error fetching todos:', error));
    }

    useEffect(() => {
        getTodo();
    },[]);

    function handleInputChange(event) {
        setNewTaskText(event.target.value);
    }

    async function addTask(event) {
        event.preventDefault();
        if (newTaskText.trim()) {
            // call the API to add the new task
            const result = await fetch("/api/Todo", {
                method: "POST",
                headers: {
                    "Content-Type": "application/json"
                },
                body: JSON.stringify({ title: newTaskText, isCompleted: false })
            })
            if(result.ok){
                await getTodo();
            }
            // TODO: Add some error handling here, inform the user if there was a problem saving the TODO item.

            setNewTaskText('');
        }
    }

    async function deleteTask(id) {
        console.log(`deleting todo ${id}`);
        const result = await fetch(`/api/Todo/${id}`, {
            method: "DELETE"
        });

        if(result.ok){
            await getTodo();
        }
        // TODO: Add some error handling here, inform the user if there was a problem saving the TODO item.
    }

    async function moveTaskUp(index) {
        console.log(`moving todo ${index} up`);
        const todo = todos[index];
        const result = await fetch(`/api/Todo/move-up/${todo.id}`,{
            method: "POST"
        });

        if(result.ok){
            await getTodo();
        }
        else{
            console.error('Error moving task up:', result.statusText);
        }
    }

    async function moveTaskDown(index) {
        const todo = todos[index];
        const result = await fetch(`/api/Todo/move-down/${todo.id}`,{
            method: "POST"
        });

        if(result.ok) {
            await getTodo();
        } else {
            console.error('Error moving task down:', result.statusText);
        }
    }

    return (
    <article
        className="todo-list"
        aria-label="task list manager">
        <header>
            <h1>TODO</h1>
                <form
                    className="todo-input"
                    onSubmit={addTask}
                    aria-controls="todo-list">
                <input
                    type="text"
                    required
                    autoFocus
                    placeholder="Enter a task"
                    value={newTaskText}
                    aria-label="Task text"
                    onChange={handleInputChange} />
                <button
                    className="add-button"
                    aria-label="Add task">
                    Add
                </button>
            </form>
        </header>
        <ol id="todo-list" aria-live="polite" aria-label="task list">
            {todos.map((task, index) =>
                <TodoItem
                    key={task.id}
                    task={task.title}
                    deleteTaskCallback={() => deleteTask(task.id)}
                    moveTaskUpCallback={() => moveTaskUp(index)}
                    moveTaskDownCallback={() => moveTaskDown(index)}
                />
            )}
        </ol>
    </article>
    );
}

export default TodoList;

This component displays the list of todo items in our front-end. It fetches the todo items from the ApiService app, and all actions are sent to that API for persistence. Notice that the fetch calls prefix the route with /api; this comes from the proxy configuration in vite.config.js. The moveTaskDown and moveTaskUp functions call the related endpoints in the API project. Next, add a new file named TodoList.css in the components folder with the following content. The code above already references this CSS file.

.todo-list {
    background-color: #1e1e1e;
    padding: 1.25rem;
    border-radius: 0.5rem;
    box-shadow: 0 0.25rem 0.5rem rgba(0, 0, 0, 0.3);
    width: 100%;
    max-width: 25rem;
}

.todo-list h1 {
    text-align: center;
    color: #e0e0e0;
}

.todo-input {
    display: flex;
    justify-content: space-between;
    margin-bottom: 1.25rem;
}

.todo-input input {
    flex: 1;
    padding: 0.625rem;
    border: 0.0625rem solid #333;
    border-radius: 0.25rem;
    margin-right: 0.625rem;
    background-color: #2c2c2c;
    color: #e0e0e0;
}

.todo-input .add-button {
    padding: 0.625rem 1.25rem;
    background-color: #007bff;
    color: #fff;
    border: none;
    border-radius: 0.25rem;
    cursor: pointer;
}

.todo-input .add-button:hover {
    background-color: #0056b3;
}

.todo-list ol {
    list-style-type: none;
    padding: 0;
}

.todo-list li {
    display: flex;
    justify-content: space-between;
    align-items: center;
    padding: 0.625rem;
    border-bottom: 0.0625rem solid #333;
}

.todo-list li:last-child {
    border-bottom: none;
}

.todo-list .text {
    flex: 1;
}

.todo-list li button {
    background: none;
    border: none;
    cursor: pointer;
    font-size: 1rem;
    margin-left: 0.625rem;
    color: #e0e0e0;
}

.todo-list li button:hover {
    color: #007bff;
}

.todo-list li button.delete-button {
    color: #ff4d4d;
}

.todo-list li button.up-button,
.todo-list li button.down-button {
    color: #4caf50;
}

This file is straightforward CSS and doesn’t need much explanation for front-end developers. Now that we have added the components, we need to update the app to use them. Open the main.jsx file in the src folder of todo-frontend. In createRoot, replace the "root" selector with main. The code should look like the following.

import { StrictMode } from 'react'
import { createRoot } from 'react-dom/client'
import './index.css'
import App from './App.jsx'

createRoot(document.querySelector('main')).render(
  <StrictMode>
    <App />
  </StrictMode>,
)

Open App.jsx and replace the content with the following.

import TodoList from "./components/TodoList"

function App() {
    return (
        <TodoList />
    )
}

export default App

Open index.css and replace the contents with the CSS below.

:root {
  font-family: Inter, system-ui, Avenir, Helvetica, Arial, sans-serif;
  line-height: 1.5;
  font-weight: 400;

  color-scheme: light dark;
  color: rgba(255, 255, 255, 0.87);
  background-color: #242424;
}
body {
    font-family: Inter, system-ui, Avenir, Helvetica, Arial, sans-serif;
    background-color: #121212;
    color: #e0e0e0;
    margin: 0;
    padding: 0;
    display: flex;
    justify-content: center;
    align-items: center;
    height: 100vh;
}

Finally, replace the contents of index.html with the markup below.

<!doctype html>
<html lang="en">
  <head>
    <meta charset="UTF-8" />
    <link rel="icon" type="image/svg+xml" href="/checkmark-square.svg" />
    <meta name="viewport" content="width=device-width, initial-scale=1.0" />
    <title>TODO app</title>
    <link href="https://fonts.googleapis.com/css?family=Inter" rel="stylesheet">
    <script defer type="module" src="/src/main.jsx"></script>
  </head>
    <body>
      <main></main>
    </body>
</html>

Now that we have updated the app, it should be working. Start the AppHost project and then click on the URL for the front-end in the dashboard. Below is a video of the app running.

Our app is now working. You can use the dashboard to view telemetry flowing automatically between the database, .NET Web API backend, and React front-end. I didn’t go into much detail on the React code here; I wrote a similar blog post for Visual Studio users that covers the React parts in more detail: Creating a React TODO app in Visual Studio 2022. I’ll now move on to wrap up this post.

Looking forward

Now that we have the app running locally, the next step would be to deploy this to production. You can deploy this to any web host that supports ASP.NET Core. We won’t go through that here, but we may revisit that in a future post.

Recap

In this post, we built a new Aspire app with an ASP.NET Core Web API and connected it to a React front end using JavaScript. We worked entirely from the command line and C# Dev Kit, leveraging the new Aspire CLI and dotnet scaffold to add database support with SQLite.

Feedback

For feedback on Aspire, please file an issue in the dotnet/aspire repo. For feedback related to dotnet scaffold, the correct repo for issues is dotnet/Scaffolding. Feedback related to C# Dev Kit can go to microsoft/vscode-dotnettools. If you enjoy this type of content, please leave a comment below; that helps us plan more posts like this.

The post Building a Full-Stack App with React and Aspire: A Step-by-Step Guide appeared first on .NET Blog.

Aspire 9.4 is here with a CLI and interactive dashboard features https://devblogs.microsoft.com/dotnet/announcing-aspire-9-4/ https://devblogs.microsoft.com/dotnet/announcing-aspire-9-4/#comments Tue, 29 Jul 2025 18:05:00 +0000 https://devblogs.microsoft.com/dotnet/?p=57419 Aspire 9.4 is packed with new features, integrations, and improvements

The post Aspire 9.4 is here with a CLI and interactive dashboard features appeared first on .NET Blog.

Today, we released .NET Aspire 9.4, our biggest release ever, bringing new integrations, interactive dashboard-based inputs, and a standalone, native AOT command line tool for creating and running Aspirified apps. We also published our first roadmap last week, which outlines a bunch of exciting features we want to tackle in the next 6 months. There are too many things in this release for me to cover in one blog, so I picked some of my favorites to share – you can see the rest in our What’s New docs.

⚡Just clone and “aspire run” with the new Aspire CLI

With Aspire 9.4, the Aspire CLI is officially GA and here to make your dev loop even more seamless. The CLI gives you a fast, scriptable, and consistent way to scaffold, run, and configure your apps.

This release brings the first four core commands:

  • aspire new – Choose from our set of templates to kickstart your app
  • aspire add – Add Aspire hosting integrations from anywhere in your repo
  • aspire run – Run your full app stack from any terminal or editor (or subdirectory!)
  • aspire config – View, set, and change CLI settings and feature flags – local to your repo or global to your machine

We’re also including an updated version of aspire publish, which is still in preview, and two commands in beta – exec (for executing CLI tools like database migrations) and deploy (for deploying your full stack to dev, test, or even prod environments). The two beta commands can be turned on via aspire config set – see the CLI Reference docs for more details.

The CLI is native AOT (ahead-of-time) compiled, making it super fast across different architectures. You can download the GA CLI from our install script:

Bash:

curl -sSL https://aspire.dev/install.sh | bash

Powershell:

iex "& { $(irm https://aspire.dev/install.ps1) }"

Note

You can continue using the Aspire CLI as a dotnet tool, but it will not be the AOT version. If you’re currently using the dotnet tool and would like to update to the AOT version, first uninstall it with dotnet tool uninstall -g aspire.cli.

Learn more about the Aspire CLI in our docs.

🖱 Custom dashboard interactivity

With Aspire, you can tailor the dashboard to your specific application with custom resource commands, named URLs, and the ability to hook into resource lifecycle events. Aspire 9.4 brings a user-friendly overhaul to our eventing APIs and a major new extensibility point for you to leverage – the interaction service.

With the interaction service, you can create custom UX to get user input during development while the app is running, present notifications, or ask for confirmation before running a command. The interaction service supports 5 different input types:

  • Text (Standard text input – great for passing args, creating mock data, etc)
  • SecretText (Masked text input – great for API keys, tokens, etc)
  • Number (Numeric input – great for seeding DB items, running load tests)
  • Choice (Dropdown from a set list – useful for structured inputs)
  • Boolean (Checkbox – great for toggling on things like mock data at runtime)

a screenshot of the aspire dashboard with a new modal "input request" asking for a name, password, dinner type, and number

The interaction service also works in the CLI for inputs required during publish and deploy.

Preview Feature

The interaction service is still in preview and the API may change as we refine it further. We’re looking forward to hearing your feedback about it on the Aspire GitHub!

🔠 Built-in prompting for parameters

Aspire 9.4 already leverages the new interaction service to collect any missing parameter values defined in your apphost. Instead of expecting every dev on your team to maintain their own local appsettings.development.json or .env file, Aspire will prompt for missing values in the dashboard before starting any resource that needs them. You can even customize parameter descriptions with rich markdown, so anyone who clones and runs your app has clear guidance on what values they need to provide – and then optionally save those values to their user secrets for non-source-controlled per-project config.

✨ Easy AI development with GitHub Models and Azure AI Foundry

Aspire streamlines distributed, complex app dev, and an increasingly popular example of this is AI development. If you’ve been adding agentic workflows, chatbots, or other AI-enabled experiences to your stacks, you know how difficult it is to try different models, wire them up, deploy them (and authenticate to them!) at dev time, and figure out what’s actually happening while you debug. But, AI-enabled apps are really just distributed apps with a new type of container – an AI model! – which means Aspire is perfect for streamlining this dev loop.

Aspire 9.4 includes two new AI-focused hosting integrations – GitHub Models (Preview) and Azure AI Foundry (Preview) – which let you define AI models in your apphost then run them locally or deploy models to develop against. Both integrations work seamlessly with the Azure AI Inference (Preview) client integration, so you get detailed OpenTelemetry traces and simple bootstrapping code for your client app or service implementing it.

This is all the code it takes to define, deploy, and run a new GitHub or Azure AI Foundry model in your apphost:

// AppHost.cs
var ai = builder.AddAzureAIFoundry("ai");

var embedding = ai.AddDeployment(
    name: "text-embedding",
    modelName: "text-embedding-3-small",
    modelVersion: "1",
    format: "OpenAI")
     .WithProperties(d =>
        {
            d.SkuCapacity = 20;
        });

🌐 Yet another resource with ExternalService and the updated YARP integration

One of my favorite features in 9.4 seems minor, but is a huge quality of life improvement for anyone working with external or third-party APIs. You can now use AddExternalService() to model any URL or endpoint as a resource, get health status, and reference or configure it the same as any other resource in the apphost.

var externalApi = builder.AddExternalService("resource-name", "https://api.example.com");

var frontend = builder.AddNpmApp("frontend", "../MyJSCodeDirectory")
    .WithReference(externalApi);

Many external APIs require some sort of auth, specific headers, versioning, or routes. The preview YARP integration has been updated with fluent transform APIs, so you can define your configuration programmatically in C#, with strong types and IntelliSense, alongside any other Aspire resource config.

🎉 Start using 9.4 today

There are so many more features, integration updates, and quality-of-life improvements that shipped in Aspire 9.4 – if I tried to cover them all, I’d be reaching Stephen Toub-level post length! You can see a comprehensive list of changes in our What’s New doc, and use them now by updating your AppHost.csproj SDK version and packages:

<!-- SDK version -->
<Sdk Name="Aspire.AppHost.Sdk" Version="9.4.0" />

<!-- NuGet package references -->
<PackageReference Include="Aspire.Hosting.AppHost" Version="9.4.0" />

I’m so proud of this release and the incredible amount of work that went into it – both from the Aspire team and our community contributors. We’re looking forward to hearing your feedback on 9.4 and what you want to see next – let us know on GitHub and our new Discord server. See you there!

MCP C# SDK Gets Major Update: Support for Protocol Version 2025-06-18 https://devblogs.microsoft.com/dotnet/mcp-csharp-sdk-2025-06-18-update/ https://devblogs.microsoft.com/dotnet/mcp-csharp-sdk-2025-06-18-update/#comments Tue, 22 Jul 2025 17:00:00 +0000 https://devblogs.microsoft.com/dotnet/?p=57357 The MCP C# SDK has been updated to support the latest Model Context Protocol specification (2025-06-18), bringing structured tool output, elicitation support, enhanced security, and more to .NET developers building AI applications.

The post MCP C# SDK Gets Major Update: Support for Protocol Version 2025-06-18 appeared first on .NET Blog.

The Model Context Protocol (MCP) continues to evolve, and we’re excited to announce that the MCP C# SDK now supports the latest specification version 2025-06-18. This update brings significant new capabilities to .NET developers building AI applications, including an improved authentication protocol, elicitation support, structured tool output, and support for resource links in tool responses.

Whether you’re building AI assistants, automation tools, or integrating AI capabilities into existing .NET applications, these new features will help you create more robust and secure solutions.

Here’s a rundown of the new features and how to access them with the MCP C# SDK.

Improved Authentication Protocol

The 2025-06-18 specification introduces a new authentication protocol that enhances security and flexibility for AI applications. The new protocol separates the roles of authentication server and resource server, allowing easier integration with existing OAuth 2.0 and OpenID Connect providers.

This is a large topic and has already been covered in detail in a separate blog post by Den Delimarsky, OAuth In The MCP C# SDK: Simple, Secure, Standard.

Elicitation: Interactive User Engagement

One of the most significant additions is the elicitation feature, which allows servers to request additional information from users during interactions. This enables more dynamic and interactive AI experiences, making it easier to gather necessary context before executing tasks.

Server Support for Elicitation

Servers request structured data from users with the ElicitAsync extension method on IMcpServer. The C# SDK registers an instance of IMcpServer with the dependency injection container, so tools can simply add a parameter of type IMcpServer to their method signature to access it.

The MCP Server must specify the schema of each input value it is requesting from the user. Only primitive types (string, number, boolean) are supported for elicitation requests. The schema may include a description to help the user understand what is being requested.

The server can request a single input or multiple inputs at once. To help distinguish multiple inputs, each input has a unique name.

The following example demonstrates how a server could request a boolean response from the user.

[McpServerTool, Description("A simple game where the user has to guess a number between 1 and 10.")]
public async Task<string> GuessTheNumber(
    IMcpServer server, // Get the McpServer from DI container
    CancellationToken token
)
{
    // First ask the user if they want to play
    var playSchema = new RequestSchema
    {
        Properties =
        {
            ["Answer"] = new BooleanSchema()
        }
    };

    var playResponse = await server.ElicitAsync(new ElicitRequestParams
    {
        Message = "Do you want to play a game?",
        RequestedSchema = playSchema
    }, token);

    // Check if user wants to play
    if (playResponse.Action != "accept" || playResponse.Content?["Answer"].ValueKind != JsonValueKind.True)
    {
        return "Maybe next time!";
    }

    // remaining implementation of GuessTheNumber method
}

Client Support for Elicitation

Elicitation is an optional feature, so clients declare their support for it in their capabilities as part of the initialize request. In the MCP C# SDK, this is done by configuring an ElicitationHandler in the McpClientOptions:

McpClientOptions options = new()
{
    ClientInfo = new()
    {
        Name = "ElicitationClient",
        Version = "1.0.0"
    },
    Capabilities = new()
    {
        Elicitation = new()
        {
            ElicitationHandler = HandleElicitationAsync
        }
    }
};

The ElicitationHandler is an asynchronous method that is called when the server requests additional information. It must gather input from the user and return the data in a format that matches the requested schema. How it does so is highly dependent on the client application and how it interacts with the user.

If the user provides the requested information, the ElicitationHandler should return an ElicitResult with the action set to “accept” and the content containing the user’s input. If the user does not provide the requested information, it should return an ElicitResult with the action set to “reject” and no content.

Below is an example of how a console application might handle elicitation requests:

async ValueTask<ElicitResult> HandleElicitationAsync(ElicitRequestParams? requestParams, CancellationToken token)
{
    // Bail out if the requestParams is null or if the requested schema has no properties
    if (requestParams?.RequestedSchema?.Properties == null)
    {
        return new ElicitResult(); // New ElicitResult with default Action "reject"
    }

    // Process the elicitation request
    if (requestParams?.Message is not null)
    {
        Console.WriteLine(requestParams.Message);
    }

    var content = new Dictionary<string, JsonElement>();

    // Loop through requestParams.requestSchema.Properties dictionary requesting values for each property
    foreach (var property in requestParams.RequestedSchema.Properties)
    {
        if (property.Value is ElicitRequestParams.BooleanSchema booleanSchema)
        {
            Console.Write($"{booleanSchema.Description}: ");
            var clientInput = Console.ReadLine();
            bool parsedBool;
            if (bool.TryParse(clientInput, out parsedBool))
            {
                content[property.Key] = JsonSerializer.Deserialize<JsonElement>(JsonSerializer.Serialize(parsedBool));
            }
        }
        else if (property.Value is ElicitRequestParams.NumberSchema numberSchema)
        {
            Console.Write($"{numberSchema.Description}: ");
            var clientInput = Console.ReadLine();
            double parsedNumber;
            if (double.TryParse(clientInput, out parsedNumber))
            {
                content[property.Key] = JsonSerializer.Deserialize<JsonElement>(JsonSerializer.Serialize(parsedNumber));
            }
        }
        else if (property.Value is ElicitRequestParams.StringSchema stringSchema)
        {
            Console.Write($"{stringSchema.Description}: ");
            var clientInput = Console.ReadLine();
            content[property.Key] = JsonSerializer.Deserialize<JsonElement>(JsonSerializer.Serialize(clientInput));
        }
    }

    // Return the user's input
    return new ElicitResult
    {
        Action = "accept",
        Content = content
    };
}

Structured Tool Output

Another important addition in the 2025-06-18 spec is support for structured tool output. Previously, tool results were allowed to contain structured data but the host/LLM had to perform the parsing and interpretation without any guidance from the tool itself. Now, tools can return structured content that is explicitly defined, allowing AI models to better understand and process the output.

The C# SDK supports this by allowing tools to specify that their output is structured, with the UseStructuredContent parameter of the McpServerTool attribute.

[McpServerTool(UseStructuredContent = true), Description("Gets a list of structured product data with detailed information.")]
public static List<Product> GetProducts(int count = 5)

The C# SDK will generate a JSON schema for the tool’s output based on the return type of the method and will include this schema in the tool’s metadata. Here is an example of the response to a tools/list call that shows the output schema for the get_products tool:

{
  "result": {
    "tools": [
      {
        "name": "get_products",
        "description": "Gets a list of structured product data with detailed information.",
        "inputSchema": {
          "type": "object",
          "properties": {
            "count": {
              "type": "integer",
              "default": 5
            }
          }
        },
        "outputSchema": {
          "type": "object",
          "properties": {
            "result": {
              "type": "array",
              "items": {
                "type": "object",
                "properties": {
                  "id": {
                    "description": "Unique identifier for the product",
                    "type": "integer"
                  },
                  "name": {
                    "description": "Name of the product",
                    "type": "string"
                  },
...

And when the tool is called, the tool response will include the structured output in the result.structuredContent field:

{
  "result": {
    "content": [
      {
        "type": "text",
        "text": "<text content>"
      }
    ],
    "structuredContent": {
      "result": [
        {
          "id": 1,
          "name": "Laptop Pro",
          "description": "High-quality laptop pro for professional use",
          "price": 278,
          "category": "Electronics",
          "brand": "TechCorp",
          "inStock": 24,
          "rating": 4.3,
          "features": [
            "Durable construction",
            "Modern design",
            "Easy to use"
          ],
          "specifications": {
            "Weight": "1 lbs",
            "Dimensions": "12x12x2 inches",
            "Warranty": "2 years"
          }
        },
        ...
      ]
    }
  },
  "id": 2,
  "jsonrpc": "2.0"
}
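One benefit of the advertised outputSchema is that a client can verify the structured result before handing it to a model. Here is a minimal illustration of that idea (plain JavaScript, not part of the SDK, using an abbreviated schema with only two of the properties shown above):

```javascript
// Abbreviated item schema for get_products (only two properties shown).
const itemSchema = {
  type: 'object',
  properties: {
    id: { type: 'integer' },
    name: { type: 'string' },
  },
};

// Type checks for the JSON Schema primitive types used above.
const checks = {
  integer: (v) => Number.isInteger(v),
  string: (v) => typeof v === 'string',
};

// Returns true when every item carries the declared fields with the right types.
function conforms(items, schema) {
  return items.every((item) =>
    Object.entries(schema.properties).every(
      ([key, prop]) => checks[prop.type](item[key])
    )
  );
}

console.log(conforms([{ id: 1, name: 'Laptop Pro' }], itemSchema)); // true
console.log(conforms([{ id: 'x', name: 'Bad' }], itemSchema));      // false
```

A real client would use a full JSON Schema validator, but the principle is the same: the tool now tells you what shape to expect, so malformed results can be rejected before they reach the model.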

Resource Links in Tool Results

Tools can now include resource links in their results, enabling better resource discovery and navigation. This is particularly useful for tools that create or manage resources, allowing clients to easily access and interact with those resources.

In the following example, a tool creates a resource with a random value and returns a link to this resource:

[McpServerTool]
[Description("Creates a resource with a random value and returns a link to this resource.")]
public async Task<CallToolResult> MakeAResource()
{
    int id = new Random().Next(1, 101); // 1 to 100 inclusive

    var resource = ResourceGenerator.CreateResource(id);

    var result = new CallToolResult();

    result.Content.Add(new ResourceLinkBlock()
    {
        Uri = resource.Uri,
        Name = resource.Name
    });

    return result;
}
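On the wire, the resource link comes back as a resource_link content block in the tool result. A sketch of the shape, based on the snippet above (the URI and name values are illustrative):

```json
{
  "result": {
    "content": [
      {
        "type": "resource_link",
        "uri": "test://resource/42",
        "name": "Resource 42"
      }
    ]
  },
  "id": 3,
  "jsonrpc": "2.0"
}
```

Clients can then follow the URI with a standard resources/read request to fetch the resource contents.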

Schema Improvements

Beyond the major features, several schema improvements enhance the developer experience:

Enhanced Metadata Support

The _meta field is now available on more interface types, providing better extensibility:

public class CustomTool : Tool
{
    public ToolMetadata Meta { get; set; } = new()
    {
        ["version"] = "1.0.0",
        ["author"] = "Your Name",
        ["category"] = "data-analysis"
    };
}
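When serialized, this metadata travels in the _meta field of the tool definition, so a tools/list entry might look roughly like this (the tool name and values here are just the illustrative ones from the snippet above):

```json
{
  "name": "custom_tool",
  "_meta": {
    "version": "1.0.0",
    "author": "Your Name",
    "category": "data-analysis"
  }
}
```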

Human-Friendly Titles

Tools, Resources, and Prompts all now support separate name and title fields.

In the MCP C# SDK, you can specify a title for your tool using the Title property of the McpServerTool attribute.

[McpServerToolType]
public class EchoTool
{
    [McpServerTool(Name = "echo", Title = "Echo Tool")]
    [Description("Echoes the message back to the client.")]
    public static string Echo(string message) => $"Echo: {message}";
}

This produces the following tool metadata in the tools/list response:

"tools": [
  {
    "name": "echo",
    "title": "Echo Tool",
    "description": "Echoes the message back to the client.",
    "inputSchema": {
      "type": "object",
      "properties": {
        "message": {
          "type": "string"
        }
      },
      "required": [
        "message"
      ]
    }
  }
]

The name and title parameters of the McpServerTool attribute are optional. If not specified, the name defaults to the lower snake case form of the method name and the title defaults to an empty string.
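For example, a hypothetical tool that relies entirely on the defaults:

```csharp
[McpServerToolType]
public class TimeTool
{
    // No Name or Title specified: the tool name defaults to "get_current_time"
    // (the lower snake case form of the method name) and the title to "".
    [McpServerTool]
    [Description("Returns the current UTC time in ISO 8601 format.")]
    public static string GetCurrentTime() => DateTime.UtcNow.ToString("O");
}
```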

Getting Started with the Updated SDK

To start using these new features, update your MCP C# SDK package:

dotnet add package ModelContextProtocol --prerelease

When implementing these new capabilities, consider the following best practices:

  • Always implement proper OAuth flows for production applications
  • Use resource indicators to prevent token misuse
  • Validate all elicited user input
  • Follow the security best practices outlined in the specification
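For instance, validating elicited input can be as simple as a guard before the value is used. This hypothetical helper is a sketch of the idea only, independent of any specific SDK API:

```csharp
// Hypothetical validator for an elicited quantity value: never trust
// user-supplied input until it parses and passes range checks.
static int ParseQuantity(string? elicited)
{
    if (!int.TryParse(elicited, out var quantity) || quantity is < 1 or > 1000)
    {
        throw new ArgumentOutOfRangeException(nameof(elicited),
            "Quantity must be an integer between 1 and 1000.");
    }

    return quantity;
}
```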

What’s Next

The MCP ecosystem continues to grow, and we’re committed to keeping the C# SDK up-to-date with the latest specification changes.

The MCP C# SDK is open source and we welcome contributions! Whether you’re reporting bugs, suggesting features, or contributing code, your involvement helps make the SDK better for everyone.

Summary

The MCP C# SDK’s support for protocol version 2025-06-18 brings powerful new capabilities to .NET developers building AI applications. With the new authentication protocol, elicitation support, structured tool output, and support for resource links in tool results, you can create more sophisticated and secure AI integrations than ever before.

Start exploring these new features today by updating your SDK and reviewing the updated documentation. The future of AI application development with .NET just got brighter!

The post MCP C# SDK Gets Major Update: Support for Protocol Version 2025-06-18 appeared first on .NET Blog.

Ask Mode vs Agent Mode – Choosing the Right Copilot Experience for .NET https://devblogs.microsoft.com/dotnet/ask-mode-vs-agent-mode/ https://devblogs.microsoft.com/dotnet/ask-mode-vs-agent-mode/#comments Mon, 21 Jul 2025 17:05:00 +0000 https://devblogs.microsoft.com/dotnet/?p=57333 GitHub Copilot Chat offers two powerful modes, Ask Mode and Agent Mode, that can dramatically accelerate your development. Learn when to use each mode to get the most out of this tool.

The post Ask Mode vs Agent Mode – Choosing the Right Copilot Experience for .NET appeared first on .NET Blog.

As a .NET developer, productivity and problem-solving are at the heart of your workflow. GitHub Copilot Chat offers two powerful modes, Ask Mode and Agent Mode, that can dramatically accelerate your development, but knowing when to use each is essential to getting the most out of this tool. In this post, we’ll break down the differences, help you understand the strengths of each mode, and provide concrete examples tailored to common .NET scenarios.

Understanding Ask Mode

Ask Mode is your go-to setting when you need quick, conversational support—think of it as asking an experienced developer for advice, troubleshooting, or code samples. In ask mode, Copilot Chat doesn’t directly interact with your workspace files; instead, it provides responses based on the context you provide.

Screenshot of GitHub Copilot Chat in Ask Mode showing the interface in Visual Studio

This mode is best suited for:

  • Getting explanations or clarifications about C#/.NET concepts
  • Requesting code snippets for specific tasks
  • Learning best practices or design patterns
  • Asking for documentation summaries

Example Scenarios and Prompts for Ask Mode:

  • “Can you explain the difference between Task and ValueTask in C#?”
  • “Show me an example of dependency injection in ASP.NET Core.”
  • “What is the best way to implement logging in a .NET 8 Web API?”
  • “Summarize the IDisposable pattern in .NET.”
  • “How do I use LINQ to group a list of objects by property?”

Ask Mode is perfect when you’re exploring concepts or looking for quick code reference without needing Copilot to analyze or manipulate your actual project files.
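As a taste of what Ask Mode returns, the LINQ grouping prompt above might produce a snippet along these lines (the sample data is illustrative):

```csharp
var products = new[]
{
    new { Name = "Laptop", Category = "Electronics" },
    new { Name = "Desk", Category = "Furniture" },
    new { Name = "Phone", Category = "Electronics" }
};

// Group the list by the Category property
foreach (var group in products.GroupBy(p => p.Category))
{
    Console.WriteLine($"{group.Key}: {group.Count()} item(s)");
}
// Output:
// Electronics: 2 item(s)
// Furniture: 1 item(s)
```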

Understanding Agent Mode

Agent Mode takes things a step further by allowing Copilot Chat to act as an intelligent agent within your codebase. Here, Copilot can reason about your actual project files, execute commands, make edits, and even help refactor or generate new code directly in your solution.

Screenshot of GitHub Copilot Chat in Agent Mode showing the interface in Visual Studio

Agent Mode is best for:

  • Refactoring existing code in your solution
  • Generating tests for your methods or classes
  • Automating repetitive tasks (updating namespaces, renaming variables, etc.)
  • Finding and fixing bugs based on your project’s actual structure
  • Performing code analysis based on your codebase context

Example Scenarios and Prompts for Agent Mode:

  • “Refactor the selected method to use async/await.”
  • “Generate unit tests for MyService in the current project.”
  • “Find all uses of the obsolete method ‘CalculateTax’ and update them to use ‘ComputeTax’.”
  • “Identify possible null reference exceptions in this file and suggest fixes.”
  • “Add XML documentation to all public methods in this class.”

In Agent Mode, Copilot becomes a coding partner working within your actual codebase, helping you automate tasks, maintain quality, and speed up development.
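To illustrate the first prompt above, an Agent Mode refactor from blocking calls to async/await might turn a hypothetical method like this into the version below:

```csharp
// Before: blocks the calling thread while the download completes
public string Download(HttpClient client, string url)
    => client.GetStringAsync(url).Result;

// After: frees the thread while the request is in flight
public async Task<string> DownloadAsync(HttpClient client, string url)
    => await client.GetStringAsync(url);
```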

Choosing the Right Mode: A Summary

  • Use Ask Mode when you want to learn, ask general C#/.NET questions, or need code samples that aren’t tied to your specific project files.
  • Use Agent Mode when you want Copilot to interact with, analyze, or modify your actual codebase—such as refactoring code, generating tests, or making bulk updates.

Feature           | Ask Mode                 | Agent Mode
------------------|--------------------------|-------------------------------------
Workspace Scope   | Current file & selection | Entire workspace
Primary Use       | Learning & guidance      | Code analysis & modification
Response Speed    | Fast                     | May take longer (analyzes workspace)
Code Changes      | Provides suggestions     | Can make direct edits
Context Awareness | Active file & selection  | Multi-file project context
Best For          | Conceptual questions     | Refactoring & automation

Pro Tip

When in doubt, start with Ask Mode. If you realize that your request requires context from your actual files or needs workspace edits, switch to Agent Mode for a seamless transition.

Conclusion

Mastering when to use Ask Mode versus Agent Mode in GitHub Copilot Chat will make you a more powerful and efficient .NET developer. Whether you’re seeking instant expertise or practical hands-on help in your codebase, Copilot Chat adapts to your needs—putting the right knowledge and capabilities at your fingertips.

Try experimenting with both modes on your next .NET project and watch your productivity soar!

The post Ask Mode vs Agent Mode – Choosing the Right Copilot Experience for .NET appeared first on .NET Blog.

Building Your First MCP Server with .NET and Publishing to NuGet https://devblogs.microsoft.com/dotnet/mcp-server-dotnet-nuget-quickstart/ https://devblogs.microsoft.com/dotnet/mcp-server-dotnet-nuget-quickstart/#comments Tue, 15 Jul 2025 20:00:00 +0000 https://devblogs.microsoft.com/dotnet/?p=57309 Learn how to create a Model Context Protocol (MCP) server using .NET 10 and publish it to NuGet — making AI capabilities discoverable and reusable across the ecosystem.

The post Building Your First MCP Server with .NET and Publishing to NuGet appeared first on .NET Blog.

Want to extend AI assistants with custom capabilities? In this post, we’ll show you how to build a Model Context Protocol (MCP) server using .NET 10 and publish it to NuGet — making your AI tools discoverable and reusable by the entire .NET community. We’ll also show you some new features we’ve added to .NET 10 and NuGet to support this, and a new MCP Server project template that makes it easier to get started!

Building MCP Servers with .NET 10

✨ Intro: What’s the Model Context Protocol?

The Model Context Protocol (MCP) is an open standard that enables AI assistants to securely connect to external data sources and tools. Think of it as a bridge between AI models and the real world — letting assistants access databases, APIs, file systems, and custom business logic.

With .NET 10 and the new MCP templates, you can create powerful servers that extend AI capabilities — and now publish them to NuGet for the entire .NET community to discover and use!

🚀 NuGet: .NET MCP Servers Available on NuGet

Here’s the exciting part: NuGet.org now supports hosting and consuming MCP servers built with the ModelContextProtocol C# SDK. This means:

  • Discoverability: Developers can find your MCP servers through NuGet search
  • Versioning: Proper semantic versioning and dependency management
  • Easy Installation: Copy ready-to-use VS Code and Visual Studio MCP configuration straight from the package page
  • Community: Join a growing ecosystem of .NET AI tools

Search for MCP servers on NuGet.org using the MCP Server package type filter, and you’ll see what the community is building!

📦 Creating Your First MCP Server

Let’s build a simple MCP server that provides weather information and random numbers. You’ll see how easy it is to get started with the new .NET 10 MCP templates.

Prerequisites

Before we start, make sure you have the .NET 10 SDK installed, along with Visual Studio Code and GitHub Copilot for testing your server.

Step 1: Install the MCP Template

First, install the MCP Server template (version 9.7.0-preview.2.25356.2 or newer):

dotnet new install Microsoft.Extensions.AI.Templates

Step 2: Create Your MCP Server Project

Create a new MCP server with the template:

dotnet new mcpserver -n SampleMcpServer
cd SampleMcpServer
dotnet build

The template gives you a working MCP server with a sample get_random_number tool. But let’s make it more interesting!
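The generated sample tool looks roughly like this (the exact shape may differ between template versions):

```csharp
[McpServerTool]
[Description("Generates a random number between the specified minimum and maximum values.")]
public int GetRandomNumber(
    [Description("Minimum value (inclusive)")] int min = 0,
    [Description("Maximum value (exclusive)")] int max = 100)
    => Random.Shared.Next(min, max);
```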

🔧 Adding Custom Tools and Configuration

Let’s enhance our MCP server with a weather tool that uses environment variables for configuration. Add a new WeatherTools.cs class to the Tools directory with the following method:

[McpServerTool]
[Description("Describes random weather in the provided city.")]
public string GetCityWeather(
    [Description("Name of the city to return weather for")] string city)
{
    // Read the environment variable during tool execution.
    // Alternatively, this could be read during startup and passed via IOptions dependency injection
    var weather = Environment.GetEnvironmentVariable("WEATHER_CHOICES");
    if (string.IsNullOrWhiteSpace(weather))
    {
        weather = "balmy,rainy,stormy";
    }

    var weatherChoices = weather.Split(",");
    var selectedWeatherIndex = Random.Shared.Next(0, weatherChoices.Length);

    return $"The weather in {city} is {weatherChoices[selectedWeatherIndex]}.";
}

Next, update your Program.cs to include .WithTools<WeatherTools>() after the previous WithTools call.

This tool demonstrates how to:

  • Accept parameters from AI assistants
  • Use environment variables for configuration
  • Return meaningful responses
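Putting it together, the registration in Program.cs might look roughly like this (the name of the template's generated tools class may differ from RandomNumberTools):

```csharp
var builder = Host.CreateApplicationBuilder(args);

builder.Services
    .AddMcpServer()
    .WithStdioServerTransport()
    .WithTools<RandomNumberTools>() // generated by the template
    .WithTools<WeatherTools>();     // our new weather tool

await builder.Build().RunAsync();
```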

🎯 Testing Your MCP Server

Configure GitHub Copilot to use your MCP server by creating .vscode/mcp.json:

{
  "servers": {
    "SampleMcpServer": {
      "type": "stdio",
      "command": "dotnet",
      "args": [
        "run",
        "--project",
        "."
      ],
      "env": {
        "WEATHER_CHOICES": "sunny,humid,freezing,perfect"
      }
    }
  }
}

Now test it in GitHub Copilot with prompts like:

  • “What’s the weather in Seattle?”
  • “Give me a random number between 1 and 100”

Screenshot showing VS Code with MCP server tools available in GitHub Copilot

📋 Configuring for NuGet Publication

Update your .mcp/server.json file to declare inputs and metadata:

{
  "description": "A sample MCP server with weather and random number tools",
  "name": "io.github.yourusername/SampleMcpServer", 
  "packages": [
    {
      "registry_name": "nuget",
      "name": "YourUsername.SampleMcpServer",
      "version": "1.0.0",
      "package_arguments": [],
      "environment_variables": [
        {
          "name": "WEATHER_CHOICES",
          "description": "Comma separated list of weather descriptions",
          "is_required": true,
          "is_secret": false
        }
      ]
    }
  ],
  "repository": {
    "url": "https://github.com/yourusername/SampleMcpServer",
    "source": "github"
  },
  "version_detail": {
    "version": "1.0.0"
  }
}

Also update your .csproj file with a unique <PackageId>:

<PackageId>YourUsername.SampleMcpServer</PackageId>
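A fuller property group might look like this (values are illustrative, and the template already sets most packaging properties for you):

```xml
<PropertyGroup>
  <PackageId>YourUsername.SampleMcpServer</PackageId>
  <PackageVersion>1.0.0</PackageVersion>
  <Description>A sample MCP server with weather and random number tools</Description>
  <PackageTags>mcp;ai;weather</PackageTags>
</PropertyGroup>
```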

🚀 Publishing to NuGet

Now for the exciting part — publishing to NuGet!

Step 1: Pack Your Project

dotnet pack -c Release

Step 2: Publish to NuGet

dotnet nuget push bin/Release/*.nupkg --api-key <your-api-key> --source https://api.nuget.org/v3/index.json

💡 Tip: Want to test first? Use the NuGet test environment at int.nugettest.org before publishing to production.

🔍 Discovering and Using MCP Servers

Once published, your MCP server becomes discoverable on NuGet.org:

  1. Search: Visit NuGet.org and filter by mcpserver package type
  2. Explore: View package details and copy the configuration from the “MCP Server” tab
  3. Install: Add the configuration to your .vscode/mcp.json file

Screenshot showing MCP server search results on NuGet.org

The generated configuration looks like this:

{
  "inputs": [
    {
      "type": "promptString",
      "id": "weather-choices",
      "description": "Comma separated list of weather descriptions",
      "password": false
    }
  ],
  "servers": {
    "YourUsername.SampleMcpServer": {
      "type": "stdio", 
      "command": "dnx",
      "args": [
        "YourUsername.SampleMcpServer",
        "--version",
        "1.0.0",
        "--yes"
      ],
      "env": {
        "WEATHER_CHOICES": "${input:weather-choices}"
      }
    }
  }
}

VS Code will prompt for input values when you first use the MCP server, making configuration seamless for users.

🔮 What’s Next?

With .NET 10 and official NuGet support for .NET MCP servers, you're now part of a growing ecosystem that's transforming how AI assistants interact with the world. The combination of .NET's robust libraries and NuGet's package management creates endless possibilities for AI extensibility.

This is our first release of the .NET MCP Server project template, and we’ve started with a very simple scenario. We’d love to hear what you’re building, and what you’d like to see in future releases of the template. Let us know at https://aka.ms/dotnet-mcp-template-survey.

Real-World MCP Server Ideas

Here are some powerful MCP servers you could build next:

  • Enterprise Database Gateway: Safely expose SQL Server, PostgreSQL, or MongoDB queries with role-based access
  • Cloud API Orchestrator: Wrap Azure, AWS, or Google Cloud services for AI-driven infrastructure management
  • Document Intelligence Hub: Process PDFs, Word docs, and spreadsheets with OCR and content extraction
  • DevOps Command Center: Automate Git operations, CI/CD pipelines, and deployment workflows
  • Data Analytics Engine: Transform CSV files, generate reports, and create visualizations on demand

Each of these represents a unique opportunity to bridge AI capabilities with real business needs — and share your solutions with the entire .NET community through NuGet.

.NET + MCP + NuGet = The future of extensible AI ✨

Happy building, and welcome to the growing community of MCP server creators!

The post Building Your First MCP Server with .NET and Publishing to NuGet appeared first on .NET Blog.
