[{"content":" Cover Book Pro Azure Governance\nA practical guide to designing and enforcing Azure governance at scale, covering policy, role-based access, subscriptions, security controls, and operational guardrails. Azure Strategy \u0026amp; Implementation Guide (Free)\nA strategic and technical blueprint for cloud adoption in Azure, from planning and architecture decisions to migration and operational best practices. Hands On Kubernetes on Azure (Free)\nA hands-on introduction to running Kubernetes workloads on Azure, focused on AKS fundamentals, deployment patterns, and cloud-native operations. Efficiently Migrating to Azure\nA practical migration playbook to move workloads from on-premises to Azure with clear assessment, modernization, and execution guidance. Implementing Azure OMS\nA focused guide on Azure monitoring and operations management, including log analytics, automation, alerting, and operational insights. Azure for Architects (Technical Review)\nA deep architectural reference for designing resilient Azure solutions across compute, data, security, integration, and operations. ","date":"0001-01-01T00:00:00Z","permalink":"/books/","title":"Books I (Co-)Authored"},{"content":"If you have been following me for a while, you know I\u0026rsquo;m a big fan of Azure reliability. It was the main topic I presented on several years ago (in the early days of Azure - if that sounds right?) and also mapped with a big part of my job as Azure Architect, consultant and trainer.\nI got amazed at the end of 2021 by Azure Chaos Studio, a service that allows you to inject faults against your Azure workloads (preferably production!), to make them more stable, more reliable.\nBut then came Generative AI, and Agentic InfraOps/DevOps. Welcome Azure SRE Agent, which was in public preview for a few months, but went GA earlier this week. 
I\u0026rsquo;ve played with it since its early inception, and thought the GA - with a lot of cool new updates - was a good time to dedicate a blog article to it.\nIntroduction: What Is Azure SRE Agent? Modern cloud systems are increasingly distributed, dynamic, and failure-prone by design. While DevOps practices have optimized delivery velocity, operational reliability still demands significant human effort, particularly during incident response, root cause analysis, and post-incident follow-up. (Having been on the consultant side in physical and cloud environments since 1996, outages and trying to fix issues are what got me into training - at least, that\u0026rsquo;s what my wife says when you ask her why I love training so much, lol. She might be right\u0026hellip;)\nAzure SRE Agent is an AI-powered reliability assistant designed to automate and augment Site Reliability Engineering practices for Azure workloads. It continuously observes telemetry (metrics, logs, traces), understands Azure resource topology, correlates incidents with recent changes, and assists with, or asks for human approval to execute, remediation steps.\nUnlike traditional monitoring or AIOps tools, Azure SRE Agent operates as an agentic system:\nIt reasons over multiple data sources simultaneously. It maintains contextual awareness of your Azure environment. It can take action via Azure CLI and REST APIs, subject to explicit approval. It integrates natively with incident management and developer workflows. In effect, Azure SRE Agent acts as a virtual SRE teammate, reducing operational toil and lowering mean time to resolution (MTTR) while preserving human oversight. 
(If we had agents in the early 2000s, maybe I would still be a technical consultant instead of a technical trainer, hmmmm)\nArchitecture and Core Capabilities At a high level, Azure SRE Agent combines four capability pillars:\nContinuous Observability Ingestion\nThe agent consumes signals from Azure Monitor, Log Analytics, Application Insights, and supported external observability systems to build a live understanding of system health and dependencies. The real benefit for me here is that organizations already have everything in place, so adoption goes smoothly, and the data the agent relies on feels familiar.\nIntelligent Diagnosis and Correlation\nWhen an alert or anomaly occurs, the agent correlates telemetry with:\nRecent deployments or configuration changes Resource topology and dependencies Historical incident patterns\nThis enables accelerated root cause analysis without manual log spelunking. (does that exist as a word?) Automated and Approval-Gated Remediation\nAzure SRE Agent can execute operational actions - think scaling, restarting services, or reverting deployments, or basically anything that relies on Azure CLI and REST APIs. All write actions are gated by RBAC and explicit approval, ensuring governance and control. (If you don\u0026rsquo;t trust the commands it suggests, don\u0026rsquo;t approve the action\u0026hellip;)\nWorkflow and Developer Tool Integration\nThe agent integrates with Azure Monitor alerts, GitHub, Azure DevOps, ServiceNow, and PagerDuty, allowing incidents to flow naturally into existing operational and engineering processes. 
(I have to be honest, I haven\u0026rsquo;t gone that far yet to integrate with source control - probably another blog post in the near future)\nSetup and Deployment Prerequisites To deploy Azure SRE Agent, the following prerequisites must be met:\nAn active Azure subscription Permissions to assign RBAC roles (Microsoft.Authorization/roleAssignments/write) Network access to the *.azuresre.ai domain Deployment in a supported region (Preview was available in EastUS2, SwedenCentral and AustraliaEast) - check the docs for the latest list Note: I didn\u0026rsquo;t find any information on how to automate the deployment using Bicep or az cli - have to come back to that at some point\nCreating an Azure SRE Agent In the Azure Portal, search for Azure SRE Agent. Select Create Agent. Create or select a dedicated resource group for the agent itself (I would recommend deploying this separate from application resources). Choose the region. Associate one or more resource groups to monitor.\nThe agent automatically gains visibility into all resources within those groups. Complete the deployment and wait for the agent to initialize. Once deployed, the agent exposes a chat-based interface in the Azure Portal, allowing engineers to interact using natural language to investigate and manage incidents.\nUsing Azure SRE Agent After the baseline deployment of the agent, it\u0026rsquo;s nothing more than running prompts. Using natural language, asking generic or more specific questions, and off it goes :)\nTo test this out, I deployed an Azure App Service connecting to Cosmos DB using a Managed Identity. 
After testing the app, I removed the App Service Managed Identity to simulate the issue.\nI opened SRE Agent and asked:\ncan you investigate my app service outage\nThis is what it came back with:\nFollowed by looking into the metrics\nTo then provide a summary of the findings and observations, INCLUDING CHART VIEWS\nDetailed Root Cause Analysis\nand a detailed description of what happened and Recommended actions\nIt identified the root cause as an identity problem, where the Web App could not connect to Cosmos DB.\nTo wrap it up with a Diagnosis Complete - Data Unreachable Root Cause report (in table format), including potential fix steps (Isn\u0026rsquo;t that amazing?? I think it\u0026rsquo;s just brilliant\u0026hellip;!!!)\nFrom there, it asked me if it was OK to move on and assist with fixing the problem. Using the same response I would give when talking to a colleague, I said\nYes, go ahead and assist me with fixing this problem using the described steps\n(I\u0026rsquo;m pretty sure just saying \u0026ldquo;yes\u0026rdquo;, or \u0026ldquo;sure\u0026rdquo; or \u0026ldquo;OK\u0026rdquo; or \u0026ldquo;YOUCANDOIT\u0026rdquo; might have worked too\u0026hellip;)\nThe above screenshot was taken after the process completed, but remember the SRE Agent can only perform actions when you, as the human-in-the-loop, acknowledge the approval.\nSmoothly, it came back with Issue resolved. Including a summary of the steps taken\nWell done SRE Agent!!\nSummary Azure SRE Agent is - apart from GitHub Copilot - my next favorite use case for Generative AI. Having experienced the challenges of cloud workload outages myself for years, spending hours, sometimes days, digging in, gathering metrics and logs, pinpointing the root cause,\u0026hellip; (which sort of was a lucrative business if I think back about it\u0026hellip;), I think this is an amazing service to add to your Azure environment. Even if you don\u0026rsquo;t trust it at first (actually, why not?) 
to take actions, having that AI assistant next to you to help you with the investigation, the outage analysis,\u0026hellip; will be a big time-saver. Which means your workload will be back up-and-running faster too.\nAnd I didn\u0026rsquo;t talk about the source control integration with GitHub or Azure DevOps. I didn\u0026rsquo;t mention the notifications through Outlook or Teams. I didn\u0026rsquo;t explain the expansion to other data scenarios, or third-party monitoring tools such as Grafana, Datadog,\u0026hellip; damn, there will be a lot of blog posts on Azure SRE Agent in the near future, I\u0026rsquo;m afraid.\nAlso, if you want some inspiration to play with this, have a look at the Microsoft Learn lab I published recently: Optimize Azure Reliability using SRE Agent.\nIf you deployed it and use it in your environment, please let me know. Happy to hear your stories!\nCheers!!\n/Peter\n","date":"2026-03-14T09:30:00-07:00","permalink":"/post/azure-sre-agent-intro/","title":"Azure SRE Agent: Bringing Agentic AI to Site Reliability Engineering on Azure"},{"content":"I recently decided to give my 8-year-old Hugo website a serious refresh. The trigger was simple: I was still using the first Hugo theme I picked up 8 years ago, and some content menu options didn\u0026rsquo;t actually do anything or were no longer relevant. I also had screenshots and other image files all over the place (5 different folder locations, duplicate image file names and the like).\nInstead of doing this modernization and cleanup manually over a few weekends, I used GitHub Copilot as an active engineering partner to accelerate the full modernization journey.\nWhere it started: search was broken in production The first issue looked small, but it exposed deeper reliability problems:\nThe search JSON endpoint existed The /search/ page in production was effectively empty The pipeline still reported success So this wasn\u0026rsquo;t a single typo. 
It was a classic \u0026ldquo;green pipeline, broken runtime behavior\u0026rdquo; scenario.\nWith Copilot, I moved from guessing to structured troubleshooting:\nValidate generated Hugo output Compare source routing/content metadata with deployed artifacts Harden the pipeline to fail fast when critical pages are missing That immediately changed the workflow from reactive debugging to proactive validation.\nCopilot helped me modernize beyond just the bug Once search was fixed, I used the same momentum to clean up years of accumulated content and asset drift.\n1) Build and deployment reliability I updated the Azure Static Web Apps pipeline to be more explicit and defensive:\nBuild validation checks for critical output files Safer prebuilt artifact deployment behavior Better guardrails so partial site generation doesn\u0026rsquo;t silently pass\ntrigger:\n- main\n\npool:\n  vmImage: \u0026#39;ubuntu-latest\u0026#39;\n\nsteps:\n- task: UseHugoExtended@1\n  inputs:\n    version: \u0026#39;latest\u0026#39;\n\n- script: hugo --minify\n  displayName: \u0026#39;Build Hugo site\u0026#39;\n\n# 🛡️ GUARDRAIL: Validate critical output files exist\n- script: |\n    if [ ! -f \u0026#34;public/search/index.json\u0026#34; ]; then\n      echo \u0026#34;ERROR: Search JSON endpoint missing!\u0026#34;\n      exit 1\n    fi\n    if [ ! -f \u0026#34;public/index.html\u0026#34; ]; then\n      echo \u0026#34;ERROR: Homepage not generated!\u0026#34;\n      exit 1\n    fi\n  displayName: \u0026#39;Validate critical pages exist\u0026#39;\n\n# 🛡️ GUARDRAIL: Check for empty or malformed search index\n- script: |\n    SIZE=$(wc -c \u0026lt; public/search/index.json)\n    if [ $SIZE -lt 100 ]; then\n      echo \u0026#34;ERROR: Search index is suspiciously small ($SIZE bytes)!\u0026#34;\n      exit 1\n    fi\n  displayName: \u0026#39;Validate search index integrity\u0026#39;\n\n# 🛡️ GUARDRAIL: Verify content pages were generated\n- script: |\n    COUNT=$(find public -name \u0026#34;index.html\u0026#34; -type f | wc -l)\n    if [ $COUNT -lt 10 ]; then\n      echo \u0026#34;WARNING: Only $COUNT pages generated (expected more)\u0026#34;\n      exit 1\n    fi\n  displayName: \u0026#39;Verify minimum content threshold\u0026#39;\n\n- task: PublishBuildArtifacts@1\n  inputs:\n    pathToPublish: \u0026#39;public\u0026#39;\n    artifactName: \u0026#39;hugo-site\u0026#39;\n  displayName: \u0026#39;Publish build artifacts\u0026#39;\n\n- task: AzureStaticWebApp@0\n  inputs:\n    azure_static_web_apps_api_token: $(AZURE_STATIC_WEB_APPS_TOKEN)\n    repo_token: $(GITHUB_TOKEN)\n    action: \u0026#39;upload\u0026#39;\n    app_location: \u0026#39;public\u0026#39;\n  displayName: \u0026#39;Deploy to Azure Static Web Apps\u0026#39;\nKey safety improvements:\n✅ Explicit validation that search index exists and has realistic content ✅ Minimum page count check to catch silent generation failures ✅ Pipeline fails fast instead of deploying broken output ✅ Clear error messages for debugging Result: deployment confidence went up significantly.\n2) Content architecture cleanup Over time, I had duplicate and legacy routes (especially around books and videos). 
Copilot helped audit what was truly used versus what was just historical baggage.\nI then:\nRedesigned the Books page into a cleaner 2-column layout (cover + title/description) Removed duplicate publications pages where canonical pages already existed Reviewed and cleaned aliases to keep routing intentional Result: fewer moving parts and clearer content ownership.\n3) Image and asset governance This was the biggest hidden technical debt.\nI had images spread across multiple legacy folders with overlapping filenames. That made reference checks noisy and risky. Copilot helped me run source-scoped audits, identify true usage, and avoid false positives from generated output.\nI used that to:\nMove post-related images into content/post/images Rewrite Markdown links in affected posts Handle filename collisions safely Remove unused files/folders only after reference validation Result: cleaner repository, fewer dead assets, and lower risk of accidental content breakage.\nWhat I liked most about using Copilot on an older codebase The biggest value wasn\u0026rsquo;t \u0026ldquo;AI wrote code for me.\u0026rdquo; It was this:\nFaster root-cause analysis Safer bulk refactoring with validation checkpoints Less context switching for repetitive search/update tasks Better confidence to remove legacy clutter without fear For old websites, this matters a lot. 
Most of the work is not feature development - it\u0026rsquo;s careful archaeology.\nPractical lessons if you want to modernize your own Hugo site If your site is aging and you don\u0026rsquo;t know where to start, this sequence worked very well for me:\nFix one visible production issue first (high leverage) Add pipeline checks for critical pages/artifacts Identify canonical routes and remove duplicates Consolidate assets by usage domain (e.g., post images) Delete only after source-level reference validation Small, verified steps beat one giant risky migration every time.\nFinal thoughts This modernization started as a broken search page after switching to a new Hugo theme and ended as a full site health upgrade and removal of technical debt.\nGitHub Copilot didn\u0026rsquo;t replace engineering judgment - it amplified it. For me, that was the real win: I could move faster and be more careful at the same time.\nIf you have an older Hugo site (or any long-running static site), this is absolutely worth doing.\nBy the way, this whole process took less than 2 hours, and about 20 prompts in a continuous conversational approach. Are you a fan of GitHub Copilot? Let me know what your coolest use case has been so far!\nCheers!!\n/Peter\n","date":"2026-02-28T00:00:00Z","permalink":"/post/how-i-used-github-copilot-to-modernize-my-8-year-old-hugo-website/","title":"How I used GitHub Copilot to modernize my 8 year old Hugo website"},{"content":"\nSyncing MCP config from VS Code to Copilot CLI in a few simple steps.\nHey awesome people,\nOver the last few weeks, I\u0026rsquo;ve been jumping between VS Code and Copilot CLI a lot more than usual. 
One thing kept annoying me: my MCP setup was perfect in VS Code, but I had to keep tweaking pieces again in the CLI.\nIf that sounds familiar, good news: if you already have MCP servers working in VS Code, you can reuse most of that setup in GitHub Copilot CLI.\nIn this post, I\u0026rsquo;ll show you the fastest way to \u0026lsquo;keep both in sync\u0026rsquo; from VS Code mcp.json to Copilot CLI mcp-config.json, with the necessary commands for both PowerShell and Bash.\nWhy this matters When you move between VS Code and terminal workflows, you really don\u0026rsquo;t want to rebuild MCP config from scratch every time.\nI made that mistake once (probably multiple times, but didn\u0026rsquo;t want to exaggerate too much\u0026hellip;), and it was exactly as fun as it sounds: tiny syntax differences, one invalid server name, one missing env var, and suddenly you\u0026rsquo;re troubleshooting config instead of actually building. (Although I can tell you that this would be a perfect use case for GenAI GitHub Copilot, lol)\nThis guide helps you:\nkeep one consistent MCP setup style (VSCode MCP Config as the \u0026lsquo;main\u0026rsquo; source) avoid common config parsing errors in Copilot CLI \u0026lsquo;keep-in-sync\u0026rsquo; in a few minutes, even if you\u0026rsquo;re new to MCP MCP in plain English MCP (Model Context Protocol) lets AI tools connect to external capabilities.\nThink of MCP servers as \u0026ldquo;skill plugins\u0026rdquo; for your assistant, like:\ndocumentation search GitHub actions (issues, PRs, code search) browser automation Azure DevOps operations File locations you should know VS Code MCP config is typically found at:\nWorkspace: ./mcp.json or ./.vscode/mcp.json User-level: Windows: %APPDATA%\\Code\\User\\mcp.json macOS: ~/Library/Application Support/Code/User/mcp.json Linux: ~/.config/Code/User/mcp.json Copilot CLI MCP config lives here on all platforms:\n~/.copilot/mcp-config.json VS Code vs Copilot CLI: the important differences The two formats are very close, but not identical:\nTop-level 
key\nVS Code: servers Copilot CLI: mcpServers Server ID naming rules in Copilot CLI\nallowed: letters, numbers, _, - server keys with / must be renamed VS Code inputs\nsupported in VS Code flow not used in Copilot CLI config Placeholder syntax\nVS Code often uses ${env:VAR} and ${input:name} Copilot CLI expects $VAR and supports explicit env mappings None of this is hard, but it\u0026rsquo;s just different enough to break things when done manually in a hurry.\nFast path (recommended): automate conversion You can grab both scripts directly from my GitHub repo:\nhttps://github.com/petender/MCP-Clone_VSCode2CLI Quick clone commands (PowerShell, bash, zsh):\ngit clone https://github.com/petender/MCP-Clone_VSCode2CLI.git\ncd MCP-Clone_VSCode2CLI\nUse one of these scripts:\nconvert-mcp-config.ps1 (Windows/macOS/Linux with pwsh) convert-mcp-config.sh (macOS/Linux with bash + python3) I personally use the PowerShell version most of the time - no surprise, as I\u0026rsquo;m primarily a Windows guy - but I didn\u0026rsquo;t want to assume everyone is using PowerShell nowadays. 
(You should though ;)\nWhat the script converts automatically servers -\u0026gt; mcpServers removes VS Code-only inputs converts ${input:name} -\u0026gt; $NAME converts ${env:VAR} -\u0026gt; $VAR adds/merges env mappings like \u0026quot;VAR\u0026quot;: \u0026quot;$VAR\u0026quot; renames invalid server IDs (for example containing /) This is exactly the part that saves the most time (and avoids the most headaches).\nPowerShell usage (convert-mcp-config.ps1) The script supports cross-platform defaults using your home path.\nEasiest mode\npwsh -ExecutionPolicy Bypass -File ./convert-mcp-config.ps1\nThis auto-discovers common input locations and writes to ~/.copilot/mcp-config.json.\nExplicit input, default output\npwsh -ExecutionPolicy Bypass -File ./convert-mcp-config.ps1 \\\n  -InputPath ./mcp.json\nExplicit input and output\npwsh -ExecutionPolicy Bypass -File ./convert-mcp-config.ps1 \\\n  -InputPath ./mcp.json \\\n  -OutputPath ~/.copilot/mcp-config.json\nCustom home path\npwsh -ExecutionPolicy Bypass -File ./convert-mcp-config.ps1 \\\n  -InputPath ./mcp.json \\\n  -UserHome /home/\u0026lt;user\u0026gt;\nBash usage (convert-mcp-config.sh) First run:\nchmod +x ./convert-mcp-config.sh\nEasiest mode\n./convert-mcp-config.sh\nExplicit input, default output\n./convert-mcp-config.sh --input ./mcp.json\nExplicit input and output\n./convert-mcp-config.sh --input ./mcp.json --output ~/.copilot/mcp-config.json\nCustom home path\n./convert-mcp-config.sh --input ./mcp.json --user-home /home/\u0026lt;user\u0026gt;\nBash prerequisites bash python3 Verify in Copilot CLI After conversion, open Copilot CLI and run:\n/mcp reload\n/mcp show\nIf your config is valid, your servers should load without parsing errors.\nIf they don\u0026rsquo;t show up right away, don\u0026rsquo;t panic. 
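To make the conversion rules concrete, here is a minimal Python sketch of what the scripts do. This is a simplified illustration with a made-up sample server (the `microsoft/playwright-mcp` entry and its env var are examples), not the actual code from the repo:

```python
import json
import re

def rewrite(value):
    """Rewrite VS Code placeholder syntax into Copilot CLI syntax."""
    if isinstance(value, str):
        # ${input:name} -> $NAME, ${env:VAR} -> $VAR
        value = re.sub(r"\$\{input:([^}]+)\}", lambda m: "$" + m.group(1).upper(), value)
        value = re.sub(r"\$\{env:([^}]+)\}", r"$\1", value)
        return value
    if isinstance(value, list):
        return [rewrite(v) for v in value]
    if isinstance(value, dict):
        return {k: rewrite(v) for k, v in value.items()}
    return value

def convert_vscode_to_cli(vscode_cfg):
    """mcp.json (VS Code) -> mcp-config.json (Copilot CLI), simplified."""
    cli_servers = {}
    for name, server in vscode_cfg.get("servers", {}).items():
        # Copilot CLI server IDs may only contain letters, numbers, _ and -
        safe_name = re.sub(r"[^A-Za-z0-9_-]", "_", name)
        cli_servers[safe_name] = rewrite(server)
    # the top-level key changes, and VS Code-only 'inputs' are dropped
    return {"mcpServers": cli_servers}

# made-up sample config for illustration
vscode = {
    "inputs": [{"id": "token", "type": "promptString"}],
    "servers": {
        "microsoft/playwright-mcp": {
            "command": "npx",
            "args": ["@playwright/mcp@latest"],
            "env": {"API_TOKEN": "${input:token}"},
        }
    },
}
print(json.dumps(convert_vscode_to_cli(vscode), indent=2))
```

The real scripts do more than this sketch - for example merging explicit env mappings like "VAR": "$VAR" - but the shape of the transformation is the same.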
In most cases it\u0026rsquo;s either naming rules or env vars (both covered below).\nRequired environment variables (for the sample config) Set these before launching Copilot CLI:\nGITHUB_PERSONAL_ACCESS_TOKEN ADO_ORG ADO_DOMAIN PowerShell session example:\n$env:GITHUB_PERSONAL_ACCESS_TOKEN = \u0026#34;\u0026lt;your_token\u0026gt;\u0026#34;\n$env:ADO_ORG = \u0026#34;\u0026lt;your_ado_org\u0026gt;\u0026#34;\n$env:ADO_DOMAIN = \u0026#34;core\u0026#34;\nbash/zsh example:\nexport GITHUB_PERSONAL_ACCESS_TOKEN=\u0026#34;\u0026lt;your_token\u0026gt;\u0026#34;\nexport ADO_ORG=\u0026#34;\u0026lt;your_ado_org\u0026gt;\u0026#34;\nexport ADO_DOMAIN=\u0026#34;core\u0026#34;\nManual conversion checklist If you want to edit by hand, follow this sequence:\nrename servers to mcpServers remove inputs replace ${input:name} with $NAME replace ${env:VAR} with $VAR and add/update env rename server IDs that contain unsupported characters keep operational properties unchanged (type, command, args, url, version, gallery) This checklist is also useful as a review checklist in PRs.\nCommon errors and quick fixes MCP server name must only contain alphanumeric characters, underscores, and hyphens Cause: a server key still contains / (or another invalid character).\nFix example:\nmicrosoft/playwright-mcp -\u0026gt; microsoft_playwright_mcp Server starts but fails with auth/runtime errors Cause: missing or wrong environment variable values.\nFix:\nvalidate variable names exactly confirm values exist in the active shell session run /mcp reload again Also make sure you start Copilot CLI from the same shell/session where variables are set.\ncommand not found Cause: one of the required tools is missing.\nFix:\ninstall dependencies used by your config (docker, npx, uvx, etc.) Summary Most VS Code MCP config can be reused in Copilot CLI. 
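The two most common failure modes - an invalid server ID and a wrong top-level key - are easy to catch before you even reload. Here is a tiny, hypothetical pre-flight check sketched in Python (the `preflight` helper is my own illustration, not part of the repo scripts):

```python
import re

# Copilot CLI server IDs: letters, numbers, _ and - only
VALID_ID = re.compile(r"^[A-Za-z0-9_-]+$")

def preflight(cli_cfg):
    """Return a list of problems that would trip up Copilot CLI at /mcp reload time."""
    problems = []
    if "mcpServers" not in cli_cfg:
        problems.append("top-level key must be 'mcpServers' (VS Code uses 'servers')")
    for name in cli_cfg.get("mcpServers", {}):
        if not VALID_ID.match(name):
            problems.append(f"invalid server ID '{name}': letters, numbers, _ and - only")
    return problems

# still VS Code style: wrong top-level key
bad_key = {"servers": {"github": {"command": "npx"}}}
# invalid server ID (contains /)
bad_id = {"mcpServers": {"microsoft/playwright-mcp": {"command": "npx"}}}
good = {"mcpServers": {"microsoft_playwright_mcp": {"command": "npx"}}}

print(preflight(bad_key))  # one finding about the top-level key
print(preflight(bad_id))   # one finding about the server ID
print(preflight(good))     # []
```

Running something like this against the converted file turns a vague /mcp reload failure into a precise fix.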
The key differences are server ID naming, top-level key name, and placeholder handling.\nWhile you could assume that MCP configuration would be standardized across platforms, it isn\u0026rsquo;t quite. But once I figured out the key syntax differences (GitHub Copilot for the win!), it ended up being much easier than I expected, especially after I stopped doing it manually and automated the boring parts (GitHub Copilot for the win x2!).\nIf this helped you, feel free to share it with your team so everyone can standardize MCP config faster. And maybe give a little GitHub Star on the repo, so I know you like it ;).\nCheers!!\n/Peter\n","date":"2026-02-26T00:00:00Z","permalink":"/post/keeping-mcp-config-in-sync-between-vscode-and-cli/","title":"Keeping MCP Server config in sync between VS Code and GitHub Copilot CLI"},{"content":"In this post, I want to share my review of Azure for Developers (Third Edition) by Kamil Mrzygłód, published by Packt Publishing and available on Amazon as well as other e-book subscription platforms.\nThis definitive guide focuses on creating secure, scalable Azure apps with GenAI, serverless, and DevOps pipelines, making it an essential resource for developers looking to build modern cloud-native applications on Azure with the latest technologies.\nAbout the book (from the cover) Supercharge your development career by mastering Azure\u0026rsquo;s evolving GenAI, container, and serverless capabilities to build scalable, secure applications with confidence. This third edition of Azure for Developers transforms complex cloud concepts into practical skills, guiding you through the design, deployment, and management of cloud-native solutions while eliminating infrastructure headaches.\nFully updated with Azure\u0026rsquo;s latest features, this hands-on guide helps you automate DevOps pipelines with GitHub Actions, deploy microservices using containers, and integrate generative AI via Azure OpenAI to modernize your development workflows. 
You will learn how to set up your environment, streamline app deployment, and implement robust service integrations using real-world best practices.\nThe final section is a game-changer for developers who want to stay ahead of the curve. It shows you how to leverage Azure\u0026rsquo;s AI and machine learning services to automate tasks, fine-tune models, and build intelligent assistants and next-generation workflows. By the end, you will have the confidence and capabilities to deliver production-grade cloud solutions that meet real-world demands and position yourself at the forefront of modern cloud development.\nWhat this book covers The book has 20 chapters, about 584 pages in total, organized in 5 different Parts:\nPart I - Setting Up Your Environment This opening section gets you started with Azure. It covers creating an Azure account and selecting the right IDE for your development needs. The section then dives into Azure CLI and Azure PowerShell, helping you choose between these powerful command-line tools and understand how to enable plugins and extensions for enhanced productivity.\nPart II - Web Applications and Workflows in Microsoft Azure This part focuses on hosting and building web solutions. It starts with Azure App Service for hosting applications, followed by Static Web Applications for developing and deploying modern web apps. The section then covers Azure Functions for serverless computing, Azure Key Vault for managing secrets and configuration securely, Logic Apps for integrating services with low-code workflows, and Durable Functions for building complex, stateful workflows.\nPart III - Containers in Microsoft Azure This section is dedicated to containerization strategies. It begins with Azure Container Registry for storing and managing container images, followed by Azure Container Instances for ad hoc workloads. 
The section then explores Azure Container Apps for developing microservices and concludes with hosting containers using Azure App Service, providing multiple deployment options for containerized applications.\nPart IV - Storage, Messaging, and Monitoring This comprehensive section covers the data and observability layers. It starts with Azure Storage (Tables, Queues, Files, and Blobs), followed by a deep dive into queuing mechanisms across Azure services. The section covers relational databases in Azure, and wraps up with Application Insights for monitoring your applications with embedded SDK telemetry and diagnostics.\nPart V - AI, ML, and DevOps The game-changing final section covers the cutting-edge technologies. It begins with integrating Azure OpenAI Service to add generative AI capabilities to your applications. The section then covers Azure Machine Learning for automating ML tasks and model training. The DevOps portion includes GitHub Actions for building and deploying applications to Azure with CI/CD automation, and concludes with developing, testing, and deploying Azure Logic Apps in a production environment.\nMy Personal Feedback and observations This book is exceptionally well-suited for its target audience: developers who want to build applications on Azure. Unlike most architecture-focused books, this one takes a very hands-on, practical approach with code samples, step-by-step instructions, and real-world scenarios.\nWhat impressed me most is the developer-centric perspective. Kamil doesn\u0026rsquo;t just explain what services do - he shows you how to implement them, with practical code examples in the context of building actual applications. The book includes numerous code samples available on GitHub, making it easy to follow along and experiment.\nThe third edition aspect is crucial here - Azure evolves rapidly, and this latest edition reflects the most current best practices and cutting-edge service offerings. 
The addition of GenAI integration with Azure OpenAI Service is particularly timely, showing developers how to build intelligent applications with generative AI. The coverage of serverless, container, and DevOps patterns with GitHub Actions is especially strong, which aligns perfectly with modern cloud-native development approaches.\nI appreciate that the book covers the full development lifecycle - from setting up your environment and local development tools, through building with Azure Functions and container services, to implementing CI/CD with GitHub Actions, monitoring with Application Insights, and integrating AI capabilities. This holistic view helps developers understand not just how to build features, but how to build production-ready, intelligent applications.\nThe section on Azure OpenAI Service and Machine Learning deserves special mention - it\u0026rsquo;s incredibly timely and practical, showing developers how to integrate generative AI and automated ML into their applications. The coverage of container deployment options (Container Apps, Container Instances, and App Service for containers) helps clarify which service to use for different scenarios - a common point of confusion. 
The DevOps automation with GitHub Actions chapter is also excellent, providing practical CI/CD patterns for Azure deployments.\nIf I were to identify the target personas, this book is perfect for:\nDevelopers preparing for the AZ-204 (Azure Developer Associate) certification - though not explicitly an exam prep book, it covers most exam topics with practical depth Full-stack developers moving to Azure who want to understand PaaS services DevOps engineers who need to understand how applications use Azure services Anyone building cloud-native applications on Azure The review questions at the end of chapters and further reading links add significant value, allowing readers to self-assess and dive deeper into topics of interest.\nSummary \u0026ldquo;Azure for Developers\u0026rdquo; (Third Edition) is an excellent practical guide for developers who want to leverage Azure\u0026rsquo;s latest services effectively. With its focused coverage of 584 pages, hands-on approach, and developer-focused perspective, it serves both as a learning resource and a desk reference.\nThe book strikes a great balance between breadth and depth - covering essential Azure services while providing enough detail to actually implement them. The emphasis on cutting-edge technologies (GenAI, serverless, containers, DevOps automation) makes it particularly relevant for building contemporary cloud applications that leverage AI and modern deployment practices.\nWhether you\u0026rsquo;re preparing for the AZ-204 certification, migrating applications to Azure, or starting a new cloud-native project, this book provides the practical knowledge you need. 
The code samples, step-by-step guides, and real-world scenarios make it much more than just a reference – it\u0026rsquo;s a hands-on learning experience.\nHighly recommended for developers at all levels who want to build robust, scalable applications on Azure!\nPing me if you have any additional questions.\nCheers!!\n/Peter\n","date":"2025-12-25T00:00:00Z","permalink":"/post/packt-book-review---azure-for-developers/","title":"Packt Book Review - Azure for Developers"},{"content":"I\u0026rsquo;m usually pretty excited about AI and more specifically about Generative AI, especially with Microsoft Copilot, GitHub Copilot and Microsoft Foundry. I might be biased, but outside of my professional interactions with GenAI, I\u0026rsquo;m not into all the social media hypes around it (does anyone remember the Studio Ghibli hype from this summer, or any similar?)\nWith a few days off for Thanksgiving here in the USA, I wanted to spend a bit more time on updating my Spotify playlists. While not always perfect, at least a few of the \u0026lsquo;recommended artists\u0026rsquo; are closely in line with the artists and songs I like.\nThe search for new sounds I have always enjoyed listening to music and exploring new artists. I remember as a kid going to the music store, buying LPs, and later on CDs. And honestly, I discovered several new great artists thanks to Spotify. There\u0026rsquo;s a special thrill in discovering new music. This time, it started with a simple quest: I wanted to explore the country blues rock corner. Think soul, whiskey bars, gritty guitar riffs, and that raw energy that bridges the rural blues tradition with rock\u0026rsquo;s rebellious spirit. It also brings back great memories of the House of Blues from work travel trips.\nI actually discovered Sons of Legion recently, which offers a unique mix of authentic soul, folk and rock music, as they describe it themselves.\nSo I dove in. 
I let Spotify\u0026rsquo;s recommendation engine guide me, clicking \u0026ldquo;like\u0026rdquo; on songs that resonated, building a playlist over the course of a week. The algorithm seemed to understand me well. The tracks had the right vibe: swampy slide guitar, stomping rhythms, smoky vocals. I was hooked.\nThe nasty surprise of discovery But then came the surprise. As I looked closer at the artists behind these songs - something I usually do to learn more about what got them into the music scene, where they are from, and what other albums they have - I realized something interesting: one of the artists was an AI-generated artist: Breaking Rust.\nYet, Spotify flagged them as a verified artist.\nAt first, I thought I had stumbled upon obscure musicians from small towns or indie labels. Their names sounded authentic, their album covers - while only showing drawings - looked convincing, and the songs fit perfectly in the genre. But maybe a bit too perfect? Especially after I couldn\u0026rsquo;t find much information on the artist or the band. These \u0026ldquo;artists\u0026rdquo; had no social media presence, no live performances, no interviews. Instead, they were products of AI music generation platforms, uploaded to Spotify (and basically any other music streaming platform\u0026hellip;) under artificial identities.\nMy GenAI dilemma Here\u0026rsquo;s the paradox: I genuinely enjoyed the songs. They had groove, soul, and typical blues/blues rock themes. Yet knowing they were AI-made changed my experience. I felt betrayed.\nQuestions coming to mind:\nWas I connecting with art, or just with a clever simulation? Does it matter if the music moves me, even if no human created it? What happens to real musicians struggling to get noticed when AI and AI artists flood the market with endless tracks? And the most important question: would I continue listening? 
And from there, would I need to do thorough research on any new artist I get recommended or discover on my own, to check if they are real?\nMy Personal Reflection After the initial shock, I couldn\u0026rsquo;t keep listening; I removed the full playlist - and flagged the artist so it would no longer play.\nSome AI tracks still made it into my playlists. But I also made a conscious effort to seek out real musicians – artists with biographies, live shows (and buying tickets for some!!), and human voices.\nI realized that part of the joy of music discovery is not just the sound, but the story behind it. Which - for me - adds depth and meaning to the listening experience.\nAI can mimic the sound, but it can\u0026rsquo;t replicate the story. And my interest in GenAI took a little punch today :/\nBtw, if you know any real, raw, blues/blues rock artists, please let me know! I am still in search of some new groups to listen to\u0026hellip;\nCheers!!\n/Peter\n","date":"2025-11-29T00:00:00Z","permalink":"/post/spotify-disappointed-me-with-ai-artists/","title":"How Spotify disappointed me with AI Artists"},{"content":"For about a year now, I\u0026rsquo;ve been teaching a lot on GitHub Copilot as part of my Microsoft role. Our program offers 2 different learning paths: one created by the Microsoft Content developers, AZ-2007, and the other managed by the GitHub Content team, known as GH-300.\nIf you know my approach to teaching tech a bit - which a learner in my class recently called \u0026lsquo;inspiring through technology\u0026rsquo; - you know I try to explain as much as possible through compelling, live demos. 
After walking learners through different GitHub Copilot features such as documenting/explaining code and generating application code (on different development frameworks), but also Azure CLI, CI/CD pipelines, Dockerfiles, YAML, JSON and the like, I usually close with Agent Mode.\nHaving shown it for the first time in April, when GitHub Copilot Agent Mode was still in preview, I usually show a demo where Agent Mode builds me an ASP.NET webapp, modifies the Welcome home page, and creates some sample employee data in a json-file, which then gets displayed in a table view in the webapp. If time allows, I also ask it to then migrate everything into a SQLite setup, which brings in more complexity such as Entity Framework, SQL data migration steps and interaction with Azure KeyVault, since I specify I want to run this against Azure SQL, but without allowing connection strings in my appsettings.json.\n(Now that I think about it, it might be another great blog post to write in the near future\u0026hellip;)\nEarlier this week however, I came up with a new scenario, asking Agent Mode to develop a shooter game using Python code, bringing me back to my youth in the mid-80\u0026rsquo;s when I was playing such games on my first 486-PC.\nAgent Mode Prompt The prompt I used was this:\n\u0026#34;as a kid, I played arcade games a lot. I want to build a Python app, which tests me on my shooting reflexes. Help me develop a game which does the following: 1. asks for player name input 2. bottom middle of the game screen shows a shooter 3. anywhere random on screen appears a target 4. player uses the space bar to simulate a shot 5. calculate the time between the target appearing and player pressing the space bar to shoot 6. if that time is less than 0.3 sec, player wins, otherwise computer wins 7. 
show a \u0026#34;YOU WIN\u0026#34; or \u0026#34;YOU ARE TOO SLOW\u0026#34; depending on the outcome\u0026#34; Agent Mode Processing From here, the Agent started rolling\u0026hellip;\nConfirming with some sort of understanding of what I asked for, followed by creating 3 todos:\nSet Up Python Environment Creating the Reflex Shooting Game Testing the Game This process took less than 1 minute, can you imagine? From there, it continued with providing detailed instructions on how the game works. Next, it also had a list of features included: Time to start the game!! Which allowed me to play exactly as I asked for. When I was too slow, it would tell me\u0026hellip; And several attempts later, I finally managed to win a game!! Summary After using GitHub Copilot for training our customers and showing capabilities through live demos for about 5 hours per class, as well as using it for a lot of \u0026ldquo;coding\u0026rdquo; tasks as part of my role as trainer - mainly creating more demo scenarios (see Trainer-Demo-Deploy to get an idea of what that means\u0026hellip;) - I thought I\u0026rsquo;d seen it all.\nYet, it keeps surprising me every single day when I try something new.\nIf I had access to this technology in the mid 80\u0026rsquo;s, I guess I would have spent more time learning about coding than playing games\u0026hellip; although this game is actually pretty addicting already. Time to wrap up this post and go play a bit more! And feel 12 years old again.\nIf you want to see a similar version of this game in action, head over to my github repo.\nCheers!!\n/Peter\n","date":"2025-11-29T00:00:00Z","permalink":"/post/using-github-copilot-agent-mode-to-vibe-code-a-python-shooting-game/","title":"Using GitHub Copilot Agent Mode to vibe code a Python shooting game"},{"content":"Out of my role as a Lead Technical Trainer at Microsoft, the portfolio of trainings I\u0026rsquo;m covering has heavily shifted to Azure AI and Copilot over the last few months. 
I am still doing Azure Architecture and Development courses as well, but not as frequently anymore. This confirms the interest we see among customers in adopting Generative AI solutions. Apart from Copilot in M365, or using Azure AI Foundry, I also started digging into Copilot Studio a lot more. Having a good background in Azure LogicApps and a bit of Power Platform, Copilot Studio feels quite comfortable to me.\nI\u0026rsquo;ve been working on a few Agent scenarios in Copilot Studio, which I will blog more about in the near future. One of the newer features that got my attention is the advanced feature of HTTP Requests, which opens the door to using REST API calls to other platforms, for example Azure.\nAs you probably know, any action against Azure requires authentication, whether an interactive admin user logon, or a Service Principal application logon. Which means that - before I can trigger any actions against Azure - I first need to get the Copilot Studio Agent authenticating to Azure, using a JWT Bearer token (Azure Entra ID OAuth 2.0 token).\nThis article walks you through the different steps and setup of the Copilot Studio Agent, to allow it to authenticate to Azure, and from there take a possible next step against the platform.\nWhat this article covers ✅ How to create an App Registration for the Copilot Studio Agent in Entra ID ✅ How to generate a JWT Bearer token in Copilot Studio for API authentication ✅ How to set up Microsoft Graph API authentication with an Azure Entra ID OAuth 2.0 token\nCreate Entra ID App Registration for Copilot Studio Agent Any Service / Application level interaction with Azure starts from an App Registration. 
This generates a Service Principal - think of it as a Service Account - which then gets linked/reused by a 3rd party application, as in our case, Copilot Studio.\nApart from creating the Service Principal entity object, it also needs corresponding API permissions to interact with Microsoft Graph.\nThese are the steps to set all this up:\nFrom Entra ID, navigate to App Registrations, and select New Registration. Provide a name for the App Registration, e.g. Copilot Studio Agent Demo, and leave all other default settings as-is. Click Register. This generates the Service Principal, with some specific IDs you need to copy aside: the Client ID, reflecting the unique GUID of the Service Principal, as well as the Tenant ID, which corresponds to your Entra ID Tenant GUID. Next, the Service Principal also needs the necessary Azure permissions, which can be assigned from the Subscription or Resource Group Access Control (IAM) RBAC settings. For my example, I assign the Reader role on Subscription level, as I only want to use it as a validation of my workflow actually running successfully. You could alter this to any permissions your scenario requires. And to request an authentication JWT token, we also need to pass a client secret, which can be generated from the Entra ID App Registration page for the newly created App Registration. Copy this secret aside, as you will need it in a later step. With all this out of the way, we have all ID information, credentials and RBAC permissions to bring into Copilot Studio. But there are a few other pieces of information to gather first: the Authentication API Endpoint, which is the Microsoft Login URL for our Tenant. 
This should look like this: https://login.microsoftonline.com/\u0026lt;yourtenantID\u0026gt;/oauth2/v2.0/token and we also need a REST API Header and Body, which are the additional parameters the Copilot Studio Agent REST API action requires:\n- Header: Content-Type: application/x-www-form-urlencoded - Body: client_id=\u0026lt;yourclientid\u0026gt;\u0026amp;client_secret=\u0026lt;clientsecret\u0026gt;\u0026amp;grant_type=client_credentials\u0026amp;scope=https://graph.microsoft.com/.default where you insert the actual values of the client ID and client secret you copied earlier for the placeholders. (Note the parameter name is client_id with an underscore; a hyphenated client-id gets rejected by the token endpoint.)\nConfigure the HTTP Request Action in Copilot Studio Navigate to Copilot Studio and open the workflow setup of your Agent.\nNavigate to Topics and Add a new topic. Select \u0026ldquo;From Blank\u0026rdquo;\nClick the + Sign below the Trigger step, and select Advanced / HTTP Request from the option menu. Complete the following fields of the HTTP Request per the overview below: URL: the login URL specified earlier: https://login.microsoftonline.com/\u0026lt;yourtenantID\u0026gt;/oauth2/v2.0/token, where the placeholder gets replaced with the actual GUID of your tenant, something like this (redacted) https://login.microsoftonline.com/1c5e3b03-f225-4622-b785-abcdefghi/oauth2/v2.0/token\nMethod: POST\nHeaders and Body:\nHeaders / Key: Content-Type Headers / Value: application/x-www-form-urlencoded Body: Raw Content Content Type: application/x-www-form-urlencoded Content: client_id=c92c3f9f-7ba3-4e5b-1234-abcdefghi\u0026amp;client_secret=nzE8Q~q-tDIrvlLkBGe2IwWH.abcdefghij_\u0026amp;grant_type=client_credentials\u0026amp;scope=https://graph.microsoft.com/.default Response headers: Create a new Global variable to store the value in, e.g. HTTPResponseVar Response data type: Record + select Edit Schema, and add the following schema structure:\nkind: Record properties: access_token: String expires_in: Number token_type: String\nSave Response as: select the Global.HTTPResponseVar again Save the changes. 
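As a side note, the token request configured above can also be reproduced outside Copilot Studio, which is handy for troubleshooting the App Registration before wiring up the flow. Here is a minimal Python sketch using only the standard library; the tenant ID, client ID and secret are placeholders you would replace with your own values:

```python
# Sketch of the same client_credentials token request the HTTP Request
# action performs. Placeholder values - substitute your own.
import json
import urllib.parse
import urllib.request

TENANT_ID = "<yourtenantID>"       # your Entra ID tenant GUID
CLIENT_ID = "<yourclientid>"       # App Registration Client ID
CLIENT_SECRET = "<clientsecret>"   # App Registration client secret

token_url = f"https://login.microsoftonline.com/{TENANT_ID}/oauth2/v2.0/token"

# Same four form fields as the Copilot Studio HTTP Request body;
# note that client_id uses an underscore.
body = urllib.parse.urlencode({
    "client_id": CLIENT_ID,
    "client_secret": CLIENT_SECRET,
    "grant_type": "client_credentials",
    "scope": "https://graph.microsoft.com/.default",
}).encode()

def request_token() -> str:
    """POST the form and return a ready-to-use 'Bearer <token>' string."""
    req = urllib.request.Request(
        token_url,
        data=body,
        headers={"Content-Type": "application/x-www-form-urlencoded"},
    )
    with urllib.request.urlopen(req) as resp:
        # Response contains access_token, expires_in and token_type,
        # matching the Record schema defined in the flow.
        payload = json.load(resp)
    return f"{payload['token_type']} {payload['access_token']}"
```

If this returns a token string on the command line, the App Registration, secret and scope are correct, and any remaining issue lies in the Copilot Studio configuration itself.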
Request Header details\nRequest Body details\nGlobal Variable details\nRecord Schema details\nWhile this flow should work fine now, you won\u0026rsquo;t get any output from it. We need to update the flow with a follow-up message, in which we read/present the output from the HTTPResponseVar variable. Click the + sign below the HTTP Request step in the workflow, and select Send a Message from the context menu.\nEnter an informative text, e.g. \u0026ldquo;Here is the Azure Token String\u0026rdquo;, and add the HTTPResponseVar variable into the text box, by selecting the insert variable {X} option and selecting the variable from the list.\nSave the changes. Next, from the Test your Agent pane, trigger the Agent flow by sending a short chat message, like \u0026ldquo;get my token\u0026rdquo;. This should result in the chat response, showing your message \u0026ldquo;Here is the Azure Token String\u0026rdquo;, and the actual JWT token with all necessary information in it. Cool, this works as expected! While we\u0026rsquo;re close, we\u0026rsquo;re not 100% done yet, as the value of this variable is not immediately reusable as an authentication token: the response record contains more than just the token itself (\u0026ldquo;access_token\u0026rdquo;, \u0026ldquo;expires_in\u0026rdquo;, \u0026ldquo;token_type\u0026rdquo;). We can fix this by running a Concatenate formula, storing only the token information we need to authenticate in a new variable. After the last message step in the flow, click the + sign again, and once more, select Send a Message. Provide a new informative message, something like \u0026ldquo;And this is the cleaned up version of the Bearer token, just what you need\u0026hellip;\u0026rdquo;, and add a new PowerFx Expression by clicking the {fX} button. 
Enter the following formula:\nConcatenate(Topic.HTTPResponseVar.token_type,\u0026#34; \u0026#34;,Topic.HTTPResponseVar.access_token) This transforms the response into a valid Bearer token text string (\u0026ldquo;Bearer ey\u0026hellip;\u0026rdquo;) which you can use for any Azure HTTP REST API in a different Topic. To do that, it\u0026rsquo;s best to save the concat result in a new Global Variable. Reuse the JWT Authenticator Topic in Copilot Studio Agent While I want to keep this article focused on the actual JWT token authentication process, I wanted to add a little teaser for a follow-up article, in which I create a Copilot Studio Agent that interacts with Azure, relying on the Bearer token from the Topic we just created. In any Copilot Studio flow you have, you can now refer to the Auth2Azure Authentication request Topic like this:\nSummary In this article, I wanted to document the necessary steps to use the Copilot Studio Agent HTTP Request task to get a Bearer token to authenticate to Azure (or any similar HTTP REST API for that matter).\nCheers!!\n/Peter\n","date":"2025-08-09T00:00:00Z","permalink":"/post/generate-azure-jwt-token-in-copilot-studio/","title":"Generate Azure JWT Token in Copilot Studio"},{"content":"Over the last few months, I\u0026rsquo;ve been working on an exciting project for our Microsoft Technical Trainer team, known as \u0026ldquo;Trainer-Demo-Deploy\u0026rdquo;, a catalog of Azure end-to-end demo scenarios, available as an Open-Source project.\nWhile we managed to get about 50 templates live, there can never be enough scenarios to integrate into your Azure classes or POC activities if you ask me. One of the challenging tasks in the project is not only coming up with demo ideas, but also creating the actual artifacts, such as Azure templates with Bicep, sample apps and sample data.\nI had an Azure Site Recovery Services scenario from a few years ago, written in modular ARM templates. 
With Bicep providing a great way to transform ARM to Bicep, I could have gone through each template file and converted them manually - I\u0026rsquo;ve done several of those over the last few months.\nBut out of teaching AZ-2007 Accelerate app development by using GitHub Copilot, where I integrate a - what I think is an amazing - demo on how to use Agent mode to deploy a sample web app, I started thinking about testing whether Agent Mode could help me with this transformation project.\nThe fact that I\u0026rsquo;m dedicating a blog post to it is mainly to confirm it worked amazingly well, as well as to share my excitement and some steps of what the process looked like. Hopefully this post inspires you to start embracing GitHub Copilot Agent Mode in your own tasks.\nMy starting templates My original setup was pretty straightforward: a folder \u0026ldquo;templates\u0026rdquo;, in which I have modular templates for each part of my Azure Site Recovery Vault deployment. Each template holds a snippet of ARM / JSON structured code to deploy one or more Azure Resources.\nI opened up the folder structure in Visual Studio Code, and opened GitHub Copilot, selecting Agent Mode. I clearly described in a prompt what I wanted the Agent to perform as tasks. I didn\u0026rsquo;t provide much detail to be honest, as initially I was merely experimenting to try and find out if the Agent could actually help with this, or how far it would go in the process. Agent Mode Prompt The prompt I used was this: for each file in the templates folder, convert to azure bicep. create a new bicep file for each, keeping the same name as the original json file. the azuredeploy should be transformed to main.bicep. 
validate all pointers to all new bicep files to be correct\u0026quot;\nFrom here, the Agent started rolling\u0026hellip;\nInforming me about the different steps it would take to handle this task, starting with exploring the templates folder to see all the files that need to be converted\nFollowed by going more in-depth into each and every template JSON file\nFollowed by starting the conversion process to Bicep files. Before doing that, it also highlighted it would check the Azure deployment best practices (although I didn\u0026rsquo;t explicitly ask it to do that - nice one!)\nIt felt like it learned from the best practices, by starting with the azuredeploy.json conversion to main.bicep first. It could also be that it started with this because I mentioned it in the prompt itself. As the main.bicep conversion took a bit longer than normal - although it was only running over it for about a minute - it prompted, asking if it was ok to continue. Obviously, I confirmed to continue. From here, it nicely continued looping through all the smaller json files, transforming them into corresponding bicep files. Since each file typically had only 1 or 2 resource references, the conversion went really smoothly. After a bit, it had finished the transformation of each ARM template to Bicep, and started updating the template links, as I asked for in my prompt, to also validate the references to all the deployment files. With all references updated, it continued with its own error checking, validating the different templates for any possible errors. Even more interesting, without me specifying it, it detected an error in the azure.yaml which I had in my project folder, from a baseline AZD template we use to create all our Trainer-Demo-Deploy scenarios. Last, it also created a main.parameters.json, to capture any specific parameters for the deployment. From here, it went back to validating the Bicep templates again, where it detected a few different issues. 
(I didn\u0026rsquo;t check in detail what got identified as an issue, as it didn\u0026rsquo;t prompt me to validate anything on my end\u0026hellip;); based on the next informational message, it struggled with a missing output for myWorkspaceKey, in the deploy-infra.bicep file. Chewing a bit on the myWorkspaceKey problem, it managed to find its own work-around to solve it. It even provided a clear explanation of why, identifying the dependency on the parent template. Feeling we were close to the end of the process, it continued amazing me, as it now also created its own documentation in a BICEP_CONVERSION_SUMMARY.md Markdown file, in which it listed the conversions it did. With all that out of the way, it ran another final validation to conclude there were no more issues, closing the task by creating another README-BICEP.md file, describing how to run the actual deployment using AZD. Finally, the Agent provided a description within the Chat Agent window, clearly describing all the tasks accomplished, with the necessary file references included: As well as adding additional details on the task validation. Finishing with describing different ways to run the actual deployment, using azd, Azure CLI and Azure PowerShell. The last step was running the deployment, and this worked without any hiccups! Summary As mentioned earlier, I didn\u0026rsquo;t intend to go through this process as part of writing a blog post. Yet, since GitHub Copilot Agent Mode happily surprised me once more, I wanted to share my joy and excitement about it.\nStarting from a somewhat complex JSON ARM template folder with about 10 modular arm-json files, it managed to nicely convert all of them into the new Bicep template language, with only a few minor issues throughout the process. 
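For readers who haven't worked with Bicep yet, here is a generic illustration of what such a conversion produces - a minimal storage account module of the kind a modular template typically contains. This is not taken from the actual Site Recovery templates; it only shows how a verbose ARM JSON resource collapses into a compact Bicep declaration:

```bicep
// Illustrative only - roughly 30 lines of ARM JSON (schema, contentVersion,
// parameters, resources array) reduce to this Bicep equivalent.
param location string = resourceGroup().location
param storageAccountName string

resource storage 'Microsoft.Storage/storageAccounts@2023-01-01' = {
  name: storageAccountName
  location: location
  sku: {
    name: 'Standard_LRS'
  }
  kind: 'StorageV2'
}

output storageId string = storage.id
```

The symbolic resource name (`storage`) replaces ARM's `reference()`/`resourceId()` plumbing, which is also why cross-template references like the myWorkspaceKey output needed extra attention during the conversion.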
Without asking for assistance or halting the process, it ran its own troubleshooting and issue resolution, resulting in a 100% successful transformation.\nApart from the technical success of the task, what surprised me even more is that it took the agent barely 5 minutes, and it only had to prompt me twice during the whole process!!\nIf you want to see this template in action, head over to my github repo and continue your Azure learning journey with more demo scenarios at Trainer-Demo-Deploy.\nCheers!!\n/Peter\n","date":"2025-07-12T00:00:00Z","permalink":"/post/using-github-copilot-agent-mode-to-transform-arm-templates-to-bicep/","title":"Using GitHub Copilot Agent Mode to transform ARM templates to BICEP"},{"content":"The Insider\u0026rsquo;s Guide to Innovation at Microsoft, written by Dean Carignan and JoAnn Garbin, and published by Post Hill Press, explores the innovation strategies and practices at Microsoft over the past 50 years.\nWhen I heard about this book, I ordered myself a copy during pre-order, and got excited from the day I got it in the mail. This was honestly one of the few books I read cover to cover in just a weekend.\nHere is my review.\nOverview The book is divided into two main sections: seven detailed case studies on different teams/products within Microsoft, and an analysis of four key innovation patterns, which were very inspiring to read.\nThe case studies cover various products and initiatives, including the Xbox, Visual Studio Code, Microsoft Office, Cognitive Services, Microsoft Research, Bing, and Responsible Innovation. Each case study provides insights into different aspects of innovation within Microsoft, highlighting both successes and failures.\nThe book aims to distill innovation practices that transcend specific technologies and time periods, making it a valuable resource for innovators across various industries. 
It emphasizes the importance of continuous improvement, collaboration, and adaptation in the innovation process.\nCase Studies Xbox Revolution: This case study explores how Microsoft entered the gaming industry and successfully launched the Xbox. It highlights the challenges faced, such as competition from established players like Sony and Nintendo, and the innovative strategies that led to Xbox\u0026rsquo;s success. For this case study specifically, it was interesting to read how the team lost the spark of innovation at some point, benefitting from the great name of the brand and the fact that it was successful. Until it wasn\u0026rsquo;t anymore. Which led to more innovation, with Game Pass as one of the biggest successes within the team, and the whole gaming industry.\nVisual Studio Code: This section covers the development of Visual Studio Code, a free source-code editor. It emphasizes the importance of community feedback and open-source collaboration in creating a product that meets the needs of developers worldwide. What struck me for this case study is the fact that it not only had to be innovative as a product and development tool, but also faced internal competition with its big sister, Visual Studio, which was the go-to development editor, and a big money machine for years. The goal was not to be a competitor, but rather an enabler for the \u0026lsquo;born on the web\u0026rsquo; developer generation, who are not typically thinking about using Visual Studio. Knowing your target audience seemed to be the key factor for the team\u0026rsquo;s success.\nMicrosoft Office: The evolution of Microsoft Office is examined, showcasing how continuous improvement and adaptation to user needs have kept it relevant and widely used over the decades. Apart from Windows, the Office brand and product suite feels like the best-known product that Microsoft released to business users, in my opinion. I honestly never thought about how innovation was key to the continuing success of the product. 
Also great to read that it had already been integrating Artificial Intelligence into a lot of the product for years. Long before it became a hype at the end of 2023.\nCognitive Services: This case study focuses on Microsoft\u0026rsquo;s AI and machine learning initiatives, particularly Cognitive Services. It discusses the integration of AI into various products and the ethical considerations involved. As I work with Azure AI Services more and more myself - delivering training on it in my role as Technical Trainer, but also as a fan-boy, thinking about how to integrate AI into some of my demo apps - it was interesting to understand more about the crucial steps the team had to take. I also loved hearing about the interaction with other teams within Microsoft to make this product successful.\nMicrosoft Research: The role of Microsoft Research in driving innovation is highlighted, showing how fundamental research can lead to groundbreaking products and technologies. Knowing only a little bit about what this team is doing, I remember their overview from 2024, which shared a lot of detail about their global work and impact across almost anything that Microsoft is doing. Microsoft Research has made substantial contributions to AI and machine learning, including the development of large language models and smaller, task-specific models. These advancements have improved natural language processing, computer vision, and other AI capabilities. Microsoft Research has used AI to enable earlier detection and treatment of diseases like esophageal cancer, potentially improving survival rates. They have also accelerated drug discovery processes for infectious diseases. The creation of a large-scale atmospheric model has transformed weather forecasting and our ability to predict and mitigate the effects of extreme weather events. 
This innovation is crucial for addressing climate change and enhancing environmental sustainability.\nAnd while not touched on in the book itself, as it had not been announced publicly yet, there is most probably also the work they did and are doing around Majorana 1, the world\u0026rsquo;s first quantum processor powered by topological qubits.\nBing: The unexpected rise of Bing in the AI space is explored, detailing the strategies that helped it become a significant player despite initial setbacks. This case study was so interesting to learn about, on different levels. First, the long and dedicated journey the team took to grab market share, quarter by quarter, for (over) a decade. Second, that the competition actually led to the start of innovation. As literally mentioned in the book: without Google, there wouldn\u0026rsquo;t be Bing, and it would still just be an internet search option within the Microsoft Network pages.\nResponsible Innovation: This section addresses the importance of ethical considerations in innovation, particularly in areas like AI and data privacy. I never thought of linking innovation to responsible technology. To me, it feels more like an outcome, an aspect of product design and realization. Great to understand that responsibility is often a key driver of innovation, especially nowadays with AI, and the dangers it brings to the world when misused.\nInnovation Patterns The book identifies four key innovation patterns that have been crucial to Microsoft\u0026rsquo;s success:\nContinuous Improvement: This pattern emphasizes the importance of ongoing refinement and enhancement of products and services. Microsoft has consistently focused on iterating and improving its offerings based on user feedback and technological advancements. This approach ensures that their products remain relevant and competitive over time. 
For example, the evolution of Microsoft Office showcases how continuous updates and feature enhancements have kept it a staple in productivity software.\nCollaboration: Collaboration is highlighted as a critical factor in driving innovation at Microsoft. This involves teamwork within the company as well as partnerships with external organizations, developers, and the broader tech community. The development of Visual Studio Code is a prime example, where open-source collaboration and community feedback played a significant role in shaping the product to meet the needs of developers worldwide.\nAdaptation: The ability to pivot and adapt to changing market conditions and user needs is another key innovation pattern. Microsoft has demonstrated this through various initiatives, such as entering the gaming industry with the Xbox and adapting its strategies to compete with established players like Sony and Nintendo. This flexibility allows Microsoft to explore new opportunities and stay ahead in a rapidly evolving tech landscape.\nPersistence: Persistence is about the determination to overcome challenges and setbacks in the pursuit of innovation. Microsoft\u0026rsquo;s journey with Bing is a testament to this pattern. Despite initial setbacks and strong competition from other search engines, Microsoft persisted and eventually found success by leveraging AI and machine learning to enhance Bing\u0026rsquo;s capabilities.\nThis was one of the rare times where I took a lot of notes on the side, as I discovered several interesting ideas (which should not be the 1st phase of innovation - read the book yourself to find out what I mean by this :-) ), and thought about how I could start incorporating some of these patterns in the work I do at Microsoft.\nConclusion \u0026ldquo;The Insider\u0026rsquo;s Guide to Innovation at Microsoft\u0026rdquo; provides valuable insights into the company\u0026rsquo;s approach to innovation, offering lessons that can be applied across various industries. 
It emphasizes the importance of a structured yet flexible process of value creation through continuous improvement, collaboration, and adaptation.\nWhile I think the title had the goal to draw attention to the innovative aspect of what Microsoft products are about, to me, it was also interesting to read about the history of how several of the key products I work with every day got invented, developed, and are continuously being re-invented, using a customer-centric approach.\nAs mentioned at the start of the article, I truly enjoyed reading this book. It gave me insights into the history of Microsoft and several of its key products, as well as helped me understand how challenging it is to develop these products. And especially the 2nd part of the book, which detailed the innovation patterns, felt useful to me, as - apart from the Technical Trainer role - I am regularly brainstorming and thinking about other ways to keep the trainer role exciting. Not just for myself, but also for my learners. And if there is one other thing I will remember from reading this book, it is that your ideas and realizations of them always have to be customer-focused, no matter if they are external customers, partners, other teams within Microsoft or colleagues within your own team.\nI hope you enjoy reading this book as much as I did!\nCheers!!\n/Peter\n","date":"2025-03-09T00:00:00Z","permalink":"/post/innovation-at-microsoft---book-review/","title":"Innovation at Microsoft - Book Review"},{"content":"\nHey folks,\nWelcome to #AzureSpringClean, an initiative from Joe Carlyle and Thomas Thornton which celebrates its 4th edition this year. I\u0026rsquo;m thrilled to be part of this again for the 3rd time this year. 
My first article had security in mind, explaining the difference between Azure Service Principals and Managed Identity.\nMy 2nd article focused on understanding DevSecOps, and how you can optimize security in your application deployment lifecycle, by \u0026ldquo;shifting left\u0026rdquo;. (https://www.007ffflearning.com/post/azure-spring-clean---devsecops-and-shifting-left-to-publish-secure-software/)\nNow, this 3rd article moves more towards the \u0026lsquo;end\u0026rsquo; of the traditional DevOps cycle, discussing Operations and Monitoring, by using Azure Application Insights.\nIntroduction In today\u0026rsquo;s digital age, monitoring and maintaining the health of applications is crucial for ensuring optimal performance and user satisfaction. Azure Monitor, a comprehensive monitoring solution from Microsoft, offers a suite of tools to help developers and IT professionals keep their applications running smoothly. One of the key components of Azure Monitor is Application Insights, which provides deep insights into application performance and user behavior. In this article, we\u0026rsquo;ll explore Application Insights, its features, and how it integrates with Azure Monitor to deliver a robust monitoring solution.\nOverview of Azure Monitor Azure Monitor is a powerful platform that provides end-to-end monitoring for your applications and infrastructure. It collects and analyzes telemetry data from various sources, including Azure resources, applications, and on-premises environments. Azure Monitor helps you understand how your applications are performing and proactively identifies issues affecting them. It encompasses several services, including Log Analytics, Application Insights, and Azure Monitor for VMs, among others.\nApplication Insights Overview Application Insights is an application performance management (APM) service within Azure Monitor. 
It is designed to monitor live applications, providing real-time insights into their performance and usage. By integrating with OpenTelemetry, Application Insights offers a vendor-neutral approach to collecting and analyzing telemetry data, enabling comprehensive observability of your applications. It supports various programming languages and frameworks, including .NET, Java, Node.js, and client-side JavaScript.\nKey Features of Application Insights End-to-End Transactions: Application Insights provides a detailed view of end-to-end transactions, allowing you to trace and diagnose issues across different components of your application. This feature supports time scrubbing, enabling you to filter and analyze specific time periods in more detail.\nPerformance and Failures: The service offers tools to monitor performance and identify failures. It includes features like the Roles tab, which preserves role selection while navigating from the application map, and the availability tool, which helps you monitor the availability and responsiveness of your application endpoints.\nApplication Map: The application map provides a visual overview of your application\u0026rsquo;s architecture and the interactions between its components. It includes features like \u0026ldquo;Zoom to fit\u0026rdquo; and grouped nodes to make the map easier to read and navigate.\nLive Metrics: With Live Metrics Stream, you can monitor your application\u0026rsquo;s health metrics in real-time, even while deploying changes. This feature helps you quickly identify and address issues as they arise.\nSmart Detection: Application Insights uses machine learning to detect anomalies in your application\u0026rsquo;s performance and sends alerts with embedded diagnostics. 
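The statistical idea behind this kind of detection, flagging telemetry points that deviate sharply from a learned baseline, can be sketched in a few lines (an illustrative z-score check in Python, not the actual Smart Detection algorithm):

```python
from statistics import mean, stdev

def is_anomalous(baseline_ms, new_ms, threshold=3.0):
    """Flag a response time that sits more than `threshold` standard
    deviations above the baseline's mean (a simple z-score test)."""
    mu = mean(baseline_ms)
    sigma = stdev(baseline_ms)
    return sigma > 0 and (new_ms - mu) / sigma > threshold

# Hypothetical request durations (ms) from a healthy period.
baseline = [98, 102, 105, 99, 101, 97, 103, 100, 104, 96]
print(is_anomalous(baseline, 250))  # True - a clear outlier
print(is_anomalous(baseline, 101))  # False - within normal variation
```

The real service is considerably more sophisticated about learning what "normal" looks like for your application, but the flag-and-alert principle is the same.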
This proactive approach helps you address potential issues before they impact users.\nIntegration with Azure Services: Application Insights integrates seamlessly with other Azure services, such as Azure DevOps, Azure Kubernetes Service (AKS), and Azure Functions. This integration allows you to monitor and manage your applications using a unified platform.\nIntegration with Azure Monitor Application Insights is a core component of Azure Monitor, and its integration with other Azure Monitor services enhances its capabilities. For example, you can use Log Analytics to query and analyze telemetry data collected by Application Insights. This integration provides a comprehensive view of your application\u0026rsquo;s performance and helps you identify trends and patterns.\nAzure Monitor also offers out-of-the-box insights for various Azure resources, such as virtual machines, containers, and storage accounts. These insights are built on workbooks, which are interactive reports that you can customize to meet your specific needs. By leveraging these insights, you can gain a deeper understanding of your application\u0026rsquo;s performance and make data-driven decisions to optimize it.\nUse Cases and Benefits Proactive Monitoring: Application Insights enables proactive monitoring of your applications, helping you identify and address issues before they impact users. This proactive approach improves the overall user experience and reduces downtime.\nPerformance Optimization: By providing detailed insights into your application\u0026rsquo;s performance, Application Insights helps you identify bottlenecks and optimize your code. This optimization leads to faster and more efficient applications.\nUser Behavior Analysis: Application Insights offers tools to analyze user behavior, such as usage patterns, session durations, and user flows. 
This analysis helps you understand how users interact with your application and identify areas for improvement.\nCost Management: By monitoring resource usage and performance, Application Insights helps you manage costs more effectively. You can identify underutilized resources and optimize their usage to reduce costs.\nEnhanced Security: Application Insights provides insights into potential security issues, such as failed login attempts and suspicious activities. By monitoring these activities, you can enhance the security of your applications and protect sensitive data.\nSeeing it in action Now that we covered the theoretical part, let\u0026rsquo;s have a look at what all this looks like from a sample application workload perspective.\nI am using a sample app which I have been using in all my Azure Architecture (AZ-305) and Developing Azure Solutions (AZ-204) classes over the years, when talking about Application Insights. It recently got moved to a new \u0026rsquo;trainer\u0026rsquo; platform out of a project I\u0026rsquo;m leading within Microsoft, based on Azure Developer CLI deployments for trainer demo scenarios. (If you\u0026rsquo;re new to AZD, you should definitely check it out!)\nHead over to Trainer-Demo-Deploy and search for tollbooth\nSelect the Tollbooth Serverless Architecture with Azure Functions card, and follow the Template Details instructions to get it deployed. Most important is having the Scenario-specific prereqs running on your local machine, as well as having the Azure Developer CLI installed.\nWhen you run azd up, it will ask you for your Azure subscription and the region where you want to deploy the scenario. Give it about 12-15min, and the fun can start\u0026hellip;\nAs you can see from the architecture, it is using several different services in Azure, to replicate a Tollbooth / Automated Parking Lot management application. 
This will generate \u0026rsquo;traffic\u0026rsquo; to be monitored through Application Insights.\nUsing this demo scenario, you showcase a solution for processing vehicle photos as they are uploaded to a storage account, using serverless technologies on Azure. The license plate data gets extracted using Azure Cognitive Service, and stored in a highly available NoSQL data store on Azure CosmosDB for exporting. The data export process will be orchestrated by a serverless Azure Functions and EventGrid-based component architecture, which coordinates exporting new license plate data to file storage using the Blob Trigger Function. Each aspect of the architecture provides live dashboard views, and more detailed information can be viewed in real-time from Azure Application Insights.\nAzure App Service - Upload Images The starting point of the demo scenario is the imageupload web application. This simulates car traffic for 500 vehicles, which should be enough to see live data dashboard views across all architecture components. Note there is a 1-2 minute delay before the metrics actually show up in the dashboards.\nNavigate to the imageupload website URL (https://%youralias%tbimageuploadapp.azurewebsites.net/)\nClick the Upload Images button - this loops an upload of 500 images into the Azure Storage Account. Once the upload process is complete, navigate to the %youralias%datalake Azure Storage Account.\nNavigate to the Images Container; notice the different image files, generated from the web application. Feel free to select a file and download it, to show it contains a car image with a license plate. 
You might open different images, to showcase there are different cars (Note: in reality, we used 10 different images, looping 50 times, to generate 500 images in total)\nAzure Functions Once the images are available in the Azure Storage Account, an Azure Function ProcessImage gets triggered, which sends the image files to Azure Cognitive Service.\nUse this Azure Function to explain the concept of triggers (HTTP, Blob Trigger,\u0026hellip;) and how the starting point is \u0026lsquo;something happens in Blob\u0026rsquo;, which kicks off the Function.\nOnce the data comes back from Cognitive Service, it triggers the next Azure Function SavePlateData, which stores text values in Azure CosmosDB.\nEvent Grid / Topics \u0026amp; Subscriptions Notice that this Function is based on an Event Trigger, coming from Event Grid (%youralias%eventgridtopic). From the Azure Portal, navigate to Event Grid, and select Topics. Open the EventGridTopic resource. Highlight that the Event Grid Topic is related to the Event Grid Subscription called SavePlate, which triggers the actual Azure Function SavePlateData. This also clarifies the use case, where Event Grid acts as the orchestrator, watching for certain events to occur, and based on the settings of the subscription, it triggers an Azure Function process. Select the SavePlate Event Grid Subscription from the dashboard view. This opens a new dashboard, showing the hierarchy of the event: Event Grid Topic : %youralias%eventgridtopic Metrics - showing the 500 events Azure Function - SavePlateData While talking about Event Grid Subscriptions, there is actually a 2nd subscription in place, which watches over the Azure Blob Storage events. Navigate back to the Azure Storage Account %youralias%tbdatalake, and navigate to Events. Notice the Event Grid Subscription blobtopicsubscription, which is a Web Hook, meaning it gets triggered based on HTTP requests.\nFrom within the Event Subscription detailed dashboard, showing Metrics initially, navigate to Filters. 
Highlight that the subscription is based on the filter Create Blob. This is what triggers the Azure Function, based on \u0026ldquo;a new blob is getting created\u0026rdquo;. All other events in the Storage Account are ignored.\nCosmos DB Open the %youralias%cosmosdb. Navigate to Data Explorer. Show the LicensePlates Database, which has 2 different Containers: NeedsManualReview (not used in this demo scenario) and Processed. The Processed Container is where the actual text information returned from Azure Cognitive Service is getting stored. Under Processed, open the Items view. This shows the different document items in the container, each document having the license plate, image file name and timestamp in a JSON document format. Application Insights Navigate to Application Insights, opening the %youralias%tbappinsights resource. Go to Live Metrics. This will show a lot of different views about the ongoing processing of Functions, Events, Storage activity and more. Note: If you see the \u0026ldquo;Demo\u0026rdquo; page, it means you don\u0026rsquo;t have live metrics (anymore), and the processing of the car images is completed already. To generate (new) live data, go back to the imageupload web app, and generate new images by pressing the \u0026ldquo;Upload images\u0026rdquo; button.\nFrom within the base charts, scroll down to the Servers section. There should be anywhere between 2-10 visible. Explain that these \u0026ldquo;servers\u0026rdquo; reflect the different Azure Functions instances getting triggered, and handling the image processing from blob to CosmosDB. Zoom in on the sample telemetry on the right-hand side. Explain how the different API-streams of the application topology are visible here. 
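The Create Blob filtering behavior from the Event Grid subscription can be sketched with mock events (the Microsoft.Storage.BlobCreated event type is the real one; the payloads and handler below are illustrative):

```python
# Mock Event Grid events; only BlobCreated events should reach the Function.
events = [
    {"eventType": "Microsoft.Storage.BlobCreated",
     "subject": "/blobServices/default/containers/images/blobs/car1.jpg"},
    {"eventType": "Microsoft.Storage.BlobDeleted",
     "subject": "/blobServices/default/containers/images/blobs/old.jpg"},
    {"eventType": "Microsoft.Storage.BlobCreated",
     "subject": "/blobServices/default/containers/images/blobs/car2.jpg"},
]

def process_image(event):
    """Stand-in for the ProcessImage Azure Function: return the blob name."""
    return event["subject"].rsplit("/", 1)[-1]

# The subscription filter: dispatch only blob-creation events to the handler.
processed = [process_image(e) for e in events
             if e["eventType"] == "Microsoft.Storage.BlobCreated"]
print(processed)  # ['car1.jpg', 'car2.jpg']
```

The delete event never reaches the handler, which is exactly what the Filters blade expresses declaratively.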
Notice how it shows the Azure Function call \u0026ldquo;SavePlateData\u0026rdquo;, as well as interaction with \u0026ldquo;Azure Computer Vision\u0026rdquo;, etc\u0026hellip; Depending on when you opened the Live Metrics view, the sample telemetry should have a red item Dependency, which simulates an issue from the Azure Function to Cognitive Service, showing you the details of the API POST Action call. Next, select Application Map within Application Insights. Explain the usage of Application Map, describing the 2 different views here. The first view, %youralias%events, shows the number of running (Azure Functions) instances, with different metrics (performance details). The Events represent communication with Azure CosmosDB. It shows the number of database calls, as well as the average performance between the Event Functions and CosmosDB. Select the value metric in the middle between Events and CosmosDB, to open the more detailed view. This opens a blade to the right-hand side of the Azure Portal, exposing many more details about the processing of events. It shows details about the CosmosDB instance, as well as performance details of each CosmosDB action (GET, Create Document, Get Collection, etc\u0026hellip;)\nClick on Investigate Performance\nUse this detailed dashboard to explain the different sections, reflecting chart representations of actual Log Analytics Queries. This can be demoed by selecting View Logs from the top menu, selecting a section, and opening it in Log Analytics. Conclusion Application Insights, as part of Azure Monitor, is a powerful tool for monitoring and optimizing the performance of your applications. Its comprehensive features, seamless integration with other Azure services, and real-time insights make it an essential component of any modern monitoring strategy. 
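As a closing illustration, the aggregation those Log Analytics queries perform can be approximated in a few lines of Python (a rough analogue of a KQL summarize of average duration by operation name, over made-up request telemetry, not actual KQL):

```python
from collections import defaultdict

# Hypothetical request telemetry rows, as Application Insights might record them.
requests = [
    {"name": "SavePlateData", "duration_ms": 120},
    {"name": "ProcessImage",  "duration_ms": 340},
    {"name": "SavePlateData", "duration_ms": 80},
    {"name": "ProcessImage",  "duration_ms": 300},
]

# Group durations by operation name.
totals = defaultdict(list)
for r in requests:
    totals[r["name"]].append(r["duration_ms"])

# Rough equivalent of: requests | summarize avg(duration_ms) by name
avg_by_name = {name: sum(d) / len(d) for name, d in totals.items()}
print(avg_by_name)  # {'SavePlateData': 100.0, 'ProcessImage': 320.0}
```

The workbook charts you see in the portal are exactly this kind of grouped aggregation, rendered over the real telemetry tables.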
By leveraging Application Insights, you can ensure that your applications run smoothly, deliver a great user experience, and achieve your business goals.\nCheers!!\n/Peter\n","date":"2025-03-06T00:00:00Z","permalink":"/post/azure-spring-clean---application-insights---inside-out/","title":"Azure Spring Clean - Application Insights - Inside Out"},{"content":"In this post, I want to share my review of the next technical book I read recently, Azure Cookbook, this time from Massimo Bonammi and Marco Obinu, published by BPB Online and available on Amazon as well as other e-book subscription platforms.\nI have the joy of calling both fine Italian gentlemen my friends and colleagues for a few years already. I bumped into Marco around 2017 when I was delivering an Azure Architect workshop for his employer. Massimo was one of my EMEA colleagues in the Microsoft Technical Trainer team and we had the pleasure of delivering a few in-person training days together before COVID. Marco joined the Technical Trainer team about 2 years later, when I was already relocated to the USA.\nWhen I got approached to do a technical book review of their work, I didn\u0026rsquo;t hesitate for a second. Not that I was their target audience, but see it as the ultimate combination of fellow Azure experts, bringing in our mutual love for good food and Azure, and having fun while reading through a book :).\nBook Review The Azure Cookbook is a comprehensive guide that offers over 75 practical recipes to help you tackle common Azure challenges in everyday scenarios. Whether you\u0026rsquo;re a seasoned Azure professional or just starting your journey with Microsoft\u0026rsquo;s cloud platform, this book provides valuable insights and solutions to enhance your Azure experience.\nThe book is structured to address key tasks any Azure Cloud Administrator, Developer and/or Azure Solution Architect should know about. 
Think of setting up permissions for a storage account, working with Cosmos DB APIs, managing Azure role-based access control, governing your Azure subscriptions using Azure Policy. And so many more (75, remember\u0026hellip;)\nEach recipe is meticulously crafted to provide step-by-step instructions, making it easy for readers to follow along and implement the solutions in their own environments. Over the different recipes, you will sometimes use only steps in the Azure Portal, sometimes relying on the Azure Command-Line Interface (CLI) with Bash or PowerShell, as well as using Infrastructure as Code with ARM templates and Bicep. And often, a good combination of all these in a single recipe.\nOne of the standout features of the Azure Cookbook is its focus on real-world administrative tasks. The authors have drawn from their extensive experience as cloud experts as well as Technical Trainers and public speakers, to present scenarios that you are most likely to encounter in your day-to-day work. This practical approach ensures that the solutions provided are not only theoretically sound but, more so, highly applicable in real-world settings.\nThe book covers a wide range of topics, including:\nStorage Management: Learn how to set up and manage Azure storage accounts, configure access permissions, and optimize storage performance. Database Integration: Explore recipes for working with Azure Cosmos DB, SQL Database, and other database services, including tips for optimizing performance and ensuring data security. Access Control: Understand how to manage Azure role-based access control (RBAC) to ensure that your resources are secure and accessible only to authorized users. Policy Management: Discover how to use Azure Policy to govern your subscriptions and ensure compliance with organizational standards. Networking: Gain insights into configuring virtual networks, setting up VPNs, and managing network security groups to protect your Azure resources. 
Automation: Learn how to automate common tasks using Azure Automation, PowerShell, and Azure CLI, saving you time and reducing the risk of human error. The Azure Cookbook also includes tips and best practices for optimizing your Azure environment, ensuring that you get the most out of your cloud investment. The authors provide clear explanations and detailed examples, making complex concepts easy to understand and implement.\nIn addition to the technical content, the book also emphasizes the importance of continuous learning and staying up-to-date with the latest Azure features and updates. Marco and Massimo encourage readers to explore new tools and techniques, fostering a culture of innovation and growth.\nHaving worked with Azure almost full-time since 2013, I still enjoyed going through these exercises, as several of them helped me refresh my skills on services I somewhat neglected over the years (that\u0026rsquo;s what happens when you move from an infrastructure background to devops, application development and AI). So don\u0026rsquo;t make the mistake of thinking this book is only for new upcoming (cloud) chefs. Even veteran cooks need to change pots and pans every now and then, LOL.\nSummary Overall, the Azure Cookbook is an invaluable resource for anyone working with Azure. Its practical recipes, real-world scenarios, and expert insights make it a must-have for IT professionals, developers, and cloud architects. Whether you\u0026rsquo;re looking to solve specific challenges or simply want to deepen your understanding of Azure, this book provides the knowledge and tools you need to succeed.\nDon\u0026rsquo;t miss out on this opportunity to enhance your Azure skills and take your cloud expertise to the next level. 
Get your copy of the Azure Cookbook today and start exploring the endless possibilities of Microsoft\u0026rsquo;s cloud platform!\nIf you\u0026rsquo;re still looking for a belated Christmas stocking stuffer, an overall little Holidays present for yourself or your acquaintances, let this be a great recommendation.\nIf you have access to the book and are reading it, or have read it already, don\u0026rsquo;t hesitate to reach out and provide your feedback.\nCheers!!\n/Peter\n","date":"2024-12-26T00:00:00Z","permalink":"/post/book-review---azure-cookbook/","title":"Book Review - Azure Cookbook"},{"content":"In this post, I want to share my review of the next technical book I read recently, Data Science in .NET with Polyglot Notebooks, this time from Matt Eland (https://bsky.app/profile/matteland.dev), published by Packt Publishing and available on Amazon as well as other e-book subscription platforms.\nIf you have been following me for a while, you know I\u0026rsquo;m gradually learning more about coding and developing applications, especially using the .NET framework. More recently, I also started using Jupyter Notebooks to animate my Azure AI workshops - specifically Semantic Kernel demos - a bit more, by running .NET Interactive mode, which allows me to show snippets of .NET code running from the Notebook, instead of the (boring) Terminal window.\nSeeing Matt\u0026rsquo;s post on Twitter regarding his upcoming book immediately got my attention, given the technology he was about to cover. (Honestly, the Data Science part initially didn\u0026rsquo;t do it for me, LOL)\nBook Review \u0026ldquo;Data Science in .NET with Polyglot Notebooks\u0026rdquo; is an insightful guide aimed at experienced .NET developers (which is not me\u0026hellip;) who are eager to delve into the realms of data science, machine learning, and AI. 
This book stands out for its practical approach, leveraging the familiar .NET ecosystem to introduce complex data science concepts through interactive experiments. The reason I call out experienced .NET developers here is because the book provides a wealth of code examples and data science-specific content. While I understand more and more about .NET and am able to build sample (demo) apps, in a lot of chapters, the data science piece was way above my head. But that doesn\u0026rsquo;t take anything away from the book, and definitely shouldn\u0026rsquo;t stop YOU from going through it if you are in the Data Science space.\nMatt\u0026rsquo;s journey from \u0026lsquo;just being a .NET developer\u0026rsquo; - his words - to a developer-data scientist is reflected in the structure and content of the book. He effectively bridges the gap between traditional software development and modern data science, making it accessible for those already proficient in .NET technologies. The book covers a wide range of topics, including data analysis, data visualization, machine learning with ML.NET, and AI orchestration using tools like OpenAI and Semantic Kernel.\nOne of the book\u0026rsquo;s biggest strengths is its hands-on approach. Each chapter is designed to be interactive, encouraging readers to experiment with code in VS Code or GitHub Codespaces. (With the code being available on GitHub, other IDE development environments can obviously be used as well if that\u0026rsquo;s more in your wheelhouse\u0026hellip;) This method not only reinforces learning but also helps developers (even juniors like myself) to see the immediate application of their skills in new domains. 
Which for me was the Semantic Kernel topic primarily.\nThe book is well-organized, starting with the basics of what Jupyter Notebooks are, a good overview of the .NET developing framework, followed by data science and gradually moving towards more advanced topics.\nFrom the list below, the bold-highlighted ones were most relevant to the knowledge I wanted to gain:\nChapter 1: Data Science, notebooks, and kernels\nChapter 2: Exploring Polyglot Notebooks\nChapter 3: Getting Data \u0026amp; Code into Your Notebooks\nChapter 4: Working with Tabular Data \u0026amp; DataFrames\nChapter 5: Visualizing Data\nChapter 6: Visualizing Variable Relationships\nChapter 7: Classification Experiments with ML.NET AutoML\nChapter 8: Regression Experiments with ML.NET AutoML\nChapter 9: Beyond AutoML: Pipelines, Trainers, \u0026amp; Transforms\nChapter 10: Deploying machine learning models\nChapter 11: Generative AI in Polyglot Notebooks\nChapter 12: AI Orchestration with Semantic Kernel\nChapter 13: Enriching documentation with Mermaid diagrams\nChapter 14: Extending Polyglot Notebooks\nChapter 15: Adopting and deploying Polyglot Notebooks\nThe most interesting - but again, harder for me because of being new to the Data Scientist craft - were the chapters on working with tabular data, visualizing data, and performing classification and regression experiments with ML.NET AutoML. Additionally, Matt doesn\u0026rsquo;t hold back from delving into the deployment of machine learning models and the integration of generative AI, providing a comprehensive overview of the current capabilities of .NET in the data science space, from which I learned most by reading through this book, to be honest.\nMatt\u0026rsquo;s writing is clear and engaging, making complex topics approachable. His personal anecdotes and practical tips add a relatable touch, making the book not just a technical manual but also a narrative of his own learning journey. 
Which is also something I always try to add to my own technical books. Sure, you want to get the tech stuff, but hey, writing tech books is done by a human being with a certain set of experience. You want the tech stuff, you get the human being view for free\u0026hellip;\nSummary In conclusion, \u0026ldquo;Data Science in .NET with Polyglot Notebooks\u0026rdquo; is a valuable resource for .NET developers looking to expand their skill set into data science and AI. It offers a blend of theoretical knowledge and practical application, making it a must-read for anyone interested in the intersection of .NET and data science.\nIf you\u0026rsquo;re still looking for a Christmas stocking stuffer, an overall little Holidays present for yourself or your acquaintances, let this be a great recommendation.\nIf you have access to the book and are reading it, or have read it already, don\u0026rsquo;t hesitate to reach out and provide your feedback.\nTo Matt\u0026hellip; thanks man, this is a great work of art! Thanks for inspiring me to continue expanding my skillset on .NET, AI, Semantic Kernel, by throwing Notebooks at me, and teasing my brain with Data Science\u0026hellip;\nCheers!!\n/Peter\n","date":"2024-12-07T00:00:00Z","permalink":"/post/packt-book-review---data-science-in-.net-with-polyglot-notebooks/","title":"Packt Book Review - Data Science in .NET with Polyglot Notebooks"},{"content":"Introduction to Developing Azure AI Solutions In today\u0026rsquo;s rapidly evolving tech landscape, Artificial Intelligence (AI) has become a cornerstone for innovation. Azure AI offers a robust suite of tools and services that empower developers to build intelligent applications. From natural language processing (NLP) to computer vision, Azure AI provides the building blocks to create solutions that can understand, interpret, and respond to human inputs in a meaningful way. 
While Microsoft has several Copilot offerings for different use cases, ranging from an AI assistant in Azure, Copilot in M365 or web and mobile, there are still valid use cases for developing your own custom Copilot. One of the key components in this ecosystem is the Semantic Kernel, a powerful tool that enhances the capabilities of AI models by providing semantic understanding.\nThe Kernel is the central component of Semantic Kernel. In its simplest form, the Kernel is a Dependency Injection container, which manages all of the services and plugins necessary to run your AI application. If you provide all of your services and plugins to the kernel, they will then be seamlessly used by the AI as needed.\nWhat is Semantic Kernel? Semantic Kernel is a framework designed to bridge the gap between raw data and meaningful insights. It leverages advanced machine learning algorithms to understand the context and semantics of the data, enabling more accurate and relevant responses. Unlike traditional keyword-based approaches, Semantic Kernel focuses on the meaning behind the words, making it a valuable asset for applications that require a deep understanding of language.\nDifference Between Using Semantic Kernel and Other Solutions Such as PromptFlow While both Semantic Kernel and PromptFlow are designed to enhance AI capabilities, they serve different purposes and offer unique advantages. PromptFlow is a tool that helps in designing and managing prompts for AI models, ensuring that the inputs are structured in a way that maximizes the model\u0026rsquo;s performance. On the other hand, Semantic Kernel goes a step further by interpreting the meaning behind the inputs, providing a more nuanced and context-aware response.\nKey Differences: Focus: PromptFlow is primarily concerned with the structure and format of prompts, while Semantic Kernel focuses on understanding the semantics and context. 
Use Cases: PromptFlow is ideal for scenarios where the input needs to be carefully crafted to elicit the desired response from the AI model. Semantic Kernel is better suited for applications that require a deep understanding of language and context. Complexity: Semantic Kernel involves more complex algorithms and models to interpret the data, whereas PromptFlow is more straightforward in its approach. Sample Code Scenarios of Using Semantic Kernel, Using C# .NET Code Scenario 1: Text Classification 1 2 3 4 5 6 7 8 9 10 using Microsoft.SemanticKernel; using Microsoft.SemanticKernel.Models; var kernel = new SemanticKernel(); var model = kernel.LoadModel(\u0026#34;text-classification-model\u0026#34;); var inputText = \u0026#34;Azure AI is transforming the tech industry.\u0026#34;; var classification = model.Classify(inputText); Console.WriteLine($\u0026#34;Classification: {classification}\u0026#34;); Scenario 2: Sentiment Analysis 1 2 3 4 5 6 7 8 9 10 using Microsoft.SemanticKernel; using Microsoft.SemanticKernel.Models; var kernel = new SemanticKernel(); var model = kernel.LoadModel(\u0026#34;sentiment-analysis-model\u0026#34;); var inputText = \u0026#34;I love using Azure AI services!\u0026#34;; var sentiment = model.AnalyzeSentiment(inputText); Console.WriteLine($\u0026#34;Sentiment: {sentiment}\u0026#34;); Scenario 3: Named Entity Recognition (NER) 1 2 3 4 5 6 7 8 9 10 11 12 13 using Microsoft.SemanticKernel; using Microsoft.SemanticKernel.Models; var kernel = new SemanticKernel(); var model = kernel.LoadModel(\u0026#34;ner-model\u0026#34;); var inputText = \u0026#34;Microsoft was founded by Bill Gates and Paul Allen.\u0026#34;; var entities = model.RecognizeEntities(inputText); foreach (var entity in entities) { Console.WriteLine($\u0026#34;Entity: {entity.Name}, Type: {entity.Type}\u0026#34;); } Scenario 4: Question Answering 1 2 3 4 5 6 7 8 9 10 using Microsoft.SemanticKernel; using Microsoft.SemanticKernel.Models; var kernel = new SemanticKernel(); var model = 
kernel.LoadModel(\u0026#34;question-answering-model\u0026#34;); var question = \u0026#34;What is Azure AI?\u0026#34;; var answer = model.AnswerQuestion(question); Console.WriteLine($\u0026#34;Answer: {answer}\u0026#34;); Conclusion Semantic Kernel is a powerful tool that enhances the capabilities of AI models by providing a deeper understanding of language and context. By leveraging Semantic Kernel, developers can build more intelligent and responsive applications that go beyond simple keyword matching. Whether you\u0026rsquo;re working on text classification, sentiment analysis, named entity recognition, or question answering, Semantic Kernel offers the tools and frameworks needed to create sophisticated AI solutions. As AI continues to evolve, tools like Semantic Kernel will play a crucial role in shaping the future of intelligent applications.\nI hope this article provides you with a comprehensive overview of Semantic Kernel and its applications. If you have any questions or need further details, feel free to ask! In an upcoming blog post, we\u0026rsquo;ll go over several use cases with many more code snippets, to give you enough examples to start building your own Copilots. Stay tuned!\nCheers!!\n/Peter\n","date":"2024-09-15T00:00:00Z","permalink":"/post/introduction-to-semantic-kernel/","title":"Introduction to Semantic Kernel"},{"content":"Introduction I got invited to present on Blazor .NET8 as part of [MS Tech Summit Poland (MSTS Summit)](https://mstechsummit.pl/en/), which I\u0026rsquo;m very excited and honored about. For most of my public speaking engagements, I try to focus on live demos, with only a minimum amount of slides, and this session is no different.\nTo help my audience reproduce the demos in their own time, I decided to write out the steps.\nThis app introduces Blazor .NET8 development, and more specifically how to easily create a Single Page App using HTML, CSS and a bit of C# code. 
Once the app is live, I expand it with data integration features, using Entity Framework and making API calls to an external API service.\nWhile I\u0026rsquo;ve been using Blazor .NET for about 3 years now as a hobby project, I feel like I am still learning development with .NET for the first time at age 48. Having succeeded in getting an actual app up-and-running, I wanted to continue sharing my experience, inspiring other readers (and viewers of the MSTS session) to learn coding as well. And maybe become a passionate Blazor developer like me.\nPrerequisites If you want to follow along and build this sample app from scratch, you need a few tools to get started:\nVisual Studio 2022 version 17.9.7 or newer, to develop the application (VSCode or other dev tools will work as well, but I\u0026rsquo;m not that familiar with those\u0026hellip;) Community Edition can be downloaded for free here (Visual Studio 2022 Community Edition - Download Latest Free Version (microsoft.com)) GitHub Account to store the application code in source control Sign Up for free here (https://github.com/join) Azure Subscription to run Azure App Service web application Get a Free Azure Subscription here (https://azure.microsoft.com/en-us/free/) Deploying your first Blazor Web Assembly app from a template Visual Studio (and .NET) provide different Blazor templates, both as an \u0026ldquo;empty template\u0026rdquo; as well as one with a functional \u0026ldquo;sample weather app\u0026rdquo;, and both options are available for Server and WebAssembly.\nWith the release of .NET8 last November, the Product Group decided to simplify getting started with Blazor, using the Blazor Web App template, actually allowing you to decide whether to use WebAssembly, Server or both, in the same project.\nLaunch Visual Studio 2022, and select Create New Project From the list of templates, select Blazor Web App Provide a name for the project, e.g. 
BlazorMSTS, and store it to your local machine\u0026rsquo;s folder of choice\nClick Next to continue the project creation wizard\nSelect .NET 8.0 (Long Term Support) as Framework version\nSelect None for authentication type\nSelect Server for Interactive Render Mode\nSelect Per Page/component for Interactivity location\nClick Create to complete the project creation wizard and wait for the template to get deployed in the Visual Studio development environment. The Solution Explorer looks like below:\nRun the app by pressing Ctrl-F5 or select Run from the upper menu (the green arrow) and wait for the compile and build phase to complete. The web app should load successfully in a new browser window. Wander around the different parts of the web app to get a bit familiar with the features. With the Blazor Server hosting model, components are executed on the server from within an ASP.NET Core app. UI updates, event handling, and JavaScript calls are handled over a SignalR connection using the WebSockets protocol. The state on the server associated with each connected client is called a circuit. Circuits aren\u0026rsquo;t tied to a specific network connection and can tolerate temporary network interruptions and attempts by the client to reconnect to the server when the connection is lost.\nClose the browser, which brings you back into the Visual Studio development environment.\nThis confirms the Blazor Server app is running as expected.\nIn the next section, you learn how to update the Home.razor page and add your own custom HTML-layout, CSS structure and actual runtime code.\nUsing the sample app to understand the core of Blazor In the Blazor running app, navigate to the Counter page by clicking the Counter option in the navigation sidebar to your left. Selecting the Click me button will perform an increment of the current count without a page refresh. 
This kind of interactivity used to require JavaScript, but with Blazor you can now use C#.\nYou can find the implementation of the Counter component in the Counter.razor file located inside the Components/Pages directory.\nWe talked about Components in our presentation as well. Every Razor page can be a component. Let\u0026rsquo;s use that in the next example:\nOpen the Home.razor file in Visual Studio. The Home.razor file already exists, and it could be seen as the replacement for the former index.html or default.asp in previous web applications. It\u0026rsquo;s located in the Components/Pages folder inside the BlazorApp directory that was created earlier.\nAdding the Counter component - coming from the Counter page - to the app\u0026rsquo;s homepage is possible by adding a \u0026lt;Counter /\u0026gt; element at the end of the Home.razor file. That\u0026rsquo;s it!!\n@page \u0026#34;/\u0026#34; \u0026lt;PageTitle\u0026gt;Home\u0026lt;/PageTitle\u0026gt; \u0026lt;h1\u0026gt;Hello, world!\u0026lt;/h1\u0026gt; Welcome to your new app. \u0026lt;Counter /\u0026gt; Running the app again will now show the Counter component, nicely on the Home Page. How easy, yet cool is that? DevOps engineers would call this minimizing technical debt, as instead of duplicating code, you can now just reuse full components.\nUpdating the template with your custom code Blazor allows you to combine web page layout code (Razor pages), basically HTML and CSS, together with actual application source code (C# DotNet), in the same razor files. I can\u0026rsquo;t compare it with previous development environments, but it seems to be one of the great things about Blazor - and I really like it, since it somewhat simplifies the structure of your application source code itself.\nTraditionally, this means creating the necessary HTML and CSS layout, followed by writing the code piece. 
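For reference, the Counter component that ships with the Blazor template typically looks like the sketch below (minor details may differ between template versions):

```razor
@page "/counter"

<PageTitle>Counter</PageTitle>

<h1>Counter</h1>

<p role="status">Current count: @currentCount</p>

<button class="btn btn-primary" @onclick="IncrementCount">Click me</button>

@code {
    // State lives in the component; the button click mutates it
    // and Blazor re-renders the affected markup, no JavaScript needed.
    private int currentCount = 0;

    private void IncrementCount()
    {
        currentCount++;
    }
}
```

Note how the markup (@currentCount) and the C# handler live in the same .razor file; this is the pattern the rest of the walkthrough builds on.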
A bit like what we talked about with the Counter.razor page.\nMost web apps rely on or provide some sort of data back-end, allowing users to pull up information, or to create and edit information in a database. In .NET, this usually gets done by Entity Framework, allowing interaction with different kinds of databases, such as SQL Server, but also Azure SQL, Azure Cosmos DB, as well as non-Microsoft scenarios such as Oracle or PostgreSQL and others.\nThe cool thing is, Visual Studio provides a Scaffolding Wizard for Entity Framework, which automates a big part of the process of creating web page entry forms, as well as the different CRUD (Create, Read, Update, Delete) operations - both the layout and the logical coding piece behind the different action buttons get created.\nLet\u0026rsquo;s check out what that looks like.\nUsing DotNet Entity Framework Scaffolding for Razor/Blazor The starting point for any data content interaction in a web app is a data model. This is a C# class, which contains the structure of the actual data you want to use. In this example, let\u0026rsquo;s consider working with conference data, such as a conference session title, a speaker, session abstract, technical domain, session duration, etc\u0026hellip; A basic model class could look like this:\npublic class ConferenceSession { public int Id { get; set; } public string? Title { get; set; } public string? Speaker { get; set; } public string? Abstract { get; set; } public string? TechnicalDomain { get; set; } public int Duration { get; set; } public bool IsPublished { get; set; } } In the Blazor Project folder Components, create a new subfolder \u0026ldquo;Data\u0026rdquo;, and create a file ConferenceSession.cs, in which you copy the above sample content. With the Class Model in place, you can now make use of the Scaffolding wizard. 
From the Project, right-click, and select Add New Scaffolded Item From the list of options, select Razor Component and Razor Components Using Entity Framework (CRUD) From the popup window, complete the necessary settings: Template: CRUD (this provides Create, Read, Update and Delete functionality; in a real-world application, you might select only one or more options) Model Class: ConferenceSession, which refers to the Model Class created earlier DbContext Class: New/Add - accept the default name, or any other name of choice Database Provider: SQL Server This will install the necessary Microsoft.EntityFrameworkCore NuGet packages and create the DbContext to interact with SQL Server, but - and this is rather cool - it will also create the necessary Razor Pages for the data model, including the CRUD action links.\nBelow the /ConferenceSessionPages subfolder, notice the different Create, Delete, Edit, Details and Index pages. Open the Create.razor page in the Visual Studio editor, and check the first couple of lines:\n@page \u0026#34;/conferencesessions/create\u0026#34; @inject BlazorApp1.Data.BlazorApp1Context DB @using BlazorApp1.Components.Data @inject NavigationManager NavigationManager the @page directive points to the URL address to use to connect to this page; the @inject refers to dependency injection, a capability of .NET to recognize \u0026lsquo;services\u0026rsquo; such as database interaction, the Navigation Manager, etc. the @using directive tells this page to recognize the content of the Data folder within the project (where the ConferenceSession class model is created)\nThe next code block contains the HTML layout of the actual Conference Session items.\nThe last code block, in between the @code {} section, is the C# code allowing us to create new items and interact with the SQL Server DB Context.\nOpen the Program.cs file; notice the builder.Services.AddDbContext lines of code, which refer to the SQL Server database integration service. 
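For clarity, the DbContext the wizard generates is roughly the sketch below. The class name BlazorApp1Context follows the default naming visible in the @inject line above; your generated file may differ slightly:

```csharp
using Microsoft.EntityFrameworkCore;

namespace BlazorApp1.Data
{
    // Scaffolded DbContext: exposes the ConferenceSession table to EF Core.
    public class BlazorApp1Context : DbContext
    {
        public BlazorApp1Context(DbContextOptions<BlazorApp1Context> options)
            : base(options)
        {
        }

        // One DbSet per model class; this is what the scaffolded
        // Razor pages query and update.
        public DbSet<BlazorApp1.Components.Data.ConferenceSession> ConferenceSession { get; set; } = default!;
    }
}
```

The scaffolder also wires this class up in Program.cs via builder.Services.AddDbContext, which is why the Create.razor page can simply @inject it.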
Also created as part of the Scaffolded Item wizard.\nBefore you can run the actual app, we need to initialize the actual database and database context, for which we need to run some command-line actions. From the Visual Studio menu, select Tools / NuGet Package Manager Console.\nrun:\nAdd-Migration ConferenceSessions next, run:\nUpdate-Database which recognizes our ConferenceSession.cs Model Class, and transforms it into SQL query language.\nLet\u0026rsquo;s run the app again, and validate our ConferenceSession CRUD pages. If you remember from the set of pages created for us, one of them is the Index.razor, which has a @page directive of /conferencesessions. This means, if we browse to our app default URL and add /conferencesessions, it will provide us with the \u0026lsquo;home page\u0026rsquo; of the Conference Sessions. Let\u0026rsquo;s try that. Click the Create New link, which redirects you to the /conferencesessions/create page. Complete the fields and click \u0026lsquo;Create\u0026rsquo; to save the record With the new record saved, you get redirected back to the Index page; notice the line item is there, together with a few additional CRUD links to the side for Editing, Deleting and opening the Details of the item. Wasn\u0026rsquo;t that cool? Think for a minute how powerful this is\u0026hellip; from scratch to having a somewhat workable app ready in less than 20 minutes!\nWith the main parts of the app \u0026lsquo;ready\u0026rsquo; (trust me, there is a lot more we can continue working on, which I might actually do in later continuing blog posts\u0026hellip;), you might finish this process - which is not part of the MSTS Summit session because of time limits - and publish this to Azure Static Web Apps. 
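If you prefer the cross-platform dotnet CLI over the Package Manager Console, the equivalent commands are roughly the following (this assumes the dotnet-ef tool is installed and you run them from the project folder):

```shell
# One-time install of the EF Core command-line tool
dotnet tool install --global dotnet-ef

# Create the migration from the ConferenceSession model class
dotnet ef migrations add ConferenceSessions

# Apply the migration to the database
dotnet ef database update
```

Add-Migration/Update-Database and dotnet ef migrations add/dotnet ef database update do the same thing; the CLI variant also works outside Visual Studio, e.g. in CI.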
The steps below should guide you through the process.\nPublish Blazor Web Assembly app to Azure Static Web Apps In this last section, I will show you how to publish this webapp to Azure Static Web Apps, a web hosting service in Azure for static web frameworks like Blazor, React, Vue and several others.\nFrom the Azure Portal, create new resource / static web app\nProvide base information for this deployment:\nResource group - any name of choice\nName of the app - any unique name for the app\nSource = GitHub\nPlan = Free\nRegion = any region of your choice\nScroll down and authenticate to GitHub; Next, select your source repo in GitHub where the code is stored (the one we just created)\nClick Build Details to provide more parameters regarding the Blazor app itself. Note you need to change the default App location from /Client to /, since our source code is in the root of the Blazor Web Assembly project, without using an ASP.NET hosted back-end.\nOnce published, it will trigger a GitHub Actions pipeline to publish the actual content\nThe YAML pipeline code is stored in the .github/workflows/ subfolder within the GitHub repository. You shouldn\u0026rsquo;t need to update this file though. It just works out-of-the-box.\nCheck in Actions what\u0026rsquo;s happening:\nOpen the details for the Build \u0026amp; Deploy workflow\nSelecting any step in the Action workflow will show more details:\nWait for the workflow to complete successfully.\nNavigate back to the Azure Static Web app, click its URL and see the Blazor Web App is running as expected.\nSummary In this article, I provided all the necessary steps to build a Blazor .NET 8 Web Server application. Starting from the default template, you updated snippets of code to inject Components, and we also used the Scaffolded Item wizard to provide CRUD operations to a data model.\nI would like to thank the organizing team of MS Tech Summit Poland 2024 for having accepted my session submission for the 3rd year in a row. 
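The workflow file that Azure generates in .github/workflows/ typically resembles the sketch below; treat it as an illustration rather than the exact generated file, since the secret name and folder values vary per deployment:

```yaml
name: Azure Static Web Apps CI/CD

on:
  push:
    branches: [ main ]

jobs:
  build_and_deploy_job:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Build And Deploy
        uses: Azure/static-web-apps-deploy@v1
        with:
          azure_static_web_apps_api_token: ${{ secrets.AZURE_STATIC_WEB_APPS_API_TOKEN }}
          repo_token: ${{ secrets.GITHUB_TOKEN }}
          action: "upload"
          app_location: "/"       # root of the Blazor project, as set in Build Details
          api_location: ""        # no ASP.NET hosted back-end
          output_location: "wwwroot"
```

The app_location value here mirrors the /Client-to-/ change described above; if those two disagree, the build step fails to find your project.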
Especially since this was my first attempt to do some (semi)live coding, to share my excitement of how I learned to write and build code at age 48. I\u0026rsquo;m already brainstorming on what Blazor app I can share in next year\u0026rsquo;s edition\u0026hellip;\n/Peter\n","date":"2024-05-30T00:00:00Z","permalink":"/post/building-your-first-blazor-.net8-app---msts-summit-companion/","title":"Building your first Blazor .NET8 app - MSTS Summit "},{"content":"Building a Marvel Hero catalog app using Blazor Server and .NET8\nIntroduction At the end of 2022, as part of the Festive Tech Calendar community initiative, I provided a step-by-step instruction blog on how to build a Blazor Web Assembly app from scratch, using .NET7.\nAbout 18 months later, a lot of things have changed in the .NET8 world, which also positively impacted new features around the Blazor Web App Framework, on both Web Assembly (Client/Browser) and Server side.\nI decided to rewrite/update the steps, using the same idea for the app, but this time redeveloping it from scratch, using .NET8 and Blazor WebAssembly RenderMode. If you want to see the live coding in action, head over to https://www.scifidevcon.com, a great community initiative to celebrate the month of May, Geekiness, Developing, cloud and everything else that fits in the combination of all those topics in a virtual conference.\nThis app introduces Blazor .NET8 development, and more specifically how to easily create a Single Page App using HTML, CSS and API calls to an external API Service at https://developer.marvel.com\nWhile I\u0026rsquo;ve been using Blazor .NET for about 3 years now as a hobby project, I feel like I am still learning development with .NET for the first time at age 48. Having succeeded in getting an actual app up-and-running, I wanted to continue sharing my experience, inspiring other readers (and viewers of the ScifiDevCon session) to learn coding as well. 
And maybe become a passionate Marvel Comics fan like me.\nPrerequisites If you want to follow along and build this sample app from scratch, you need a few tools to get started:\nVisual Studio 2022 version 17.9.7 or newer, to develop the application (VSCode or other dev tools will work as well, but I\u0026rsquo;m not that familiar with those\u0026hellip;) Community Edition can be downloaded for free here (Visual Studio 2022 Community Edition - Download Latest Free Version (microsoft.com)) GitHub Account to store the application code in source control Sign Up for free here (https://github.com/join) Azure Subscription to run Azure App Service web application Get a Free Azure Subscription here (https://azure.microsoft.com/en-us/free/) Marvel Developer Account to get access to the API back-end Register for free at https://developer.marvel.com Deploying your first Blazor Web Assembly app from a template Visual Studio (and .NET) provide different Blazor templates, both as an \u0026ldquo;empty template\u0026rdquo; as well as one with a functional \u0026ldquo;sample weather app\u0026rdquo;, and both options are available for Server and WebAssembly.\nWith the release of .NET8 last November, the Product Group decided to simplify getting started with Blazor, using the Blazor Web App template, actually allowing you to decide whether to use WebAssembly, Server or both, in the same project.\nLaunch Visual Studio 2022, and select Create New Project From the list of templates, select Blazor Web App Provide a name for the project, e.g. BlazorMarvel8, and store it to your local machine\u0026rsquo;s folder of choice\nClick Next to continue the project creation wizard\nSelect .NET 8.0 (Long Term Support) as Framework version\nSelect None for authentication type\nSelect WebAssembly for Interactive Render Mode\nSelect Per Page/component for Interactivity location\nClick Create to complete the project creation wizard and wait for the template to get deployed in the Visual Studio development environment. 
The Solution Explorer looks like below:\nRun the app by pressing Ctrl-F5 or select Run from the upper menu (the green arrow) and wait for the compile and build phase to complete. The web app should load successfully in a new browser window. Wander around the different parts of the web app to get a bit familiar with the features. In the context of .NET 8, Blazor WebAssembly projects typically consist of two separate projects: an app and an app.client. The app project is the server-side part of the Blazor application, which can serve pages or views as a Razor Pages or MVC app. The app.client project contains the client-side Blazor application that runs in the browser on a WebAssembly-based .NET runtime. The separation of the client and server projects in the Blazor WebAssembly hosting model provides a clear separation of concerns, allowing for server-side functionality, integration with ASP.NET Core features, and flexibility in hosting and deployment options. This architecture aligns well with the server-client model and the capabilities of the ASP.NET Core framework.\nFor instance, Blazor WebAssembly can be standalone for simple, offline apps, but having a separate server project unlocks improved security, scalability, complex server tasks, and potential offline features, making it ideal for more elaborate and demanding applications. 
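As a rough sketch of the app/app.client split described above (exact items vary by template version and project name, so treat names as illustrative):

```text
BlazorMarvel8.sln
├── BlazorMarvel8/               # server project (ASP.NET Core host)
│   ├── Components/
│   │   ├── Pages/               # server-rendered pages
│   │   └── App.razor            # root component, render mode configuration
│   └── Program.cs               # server services and middleware
└── BlazorMarvel8.Client/        # client project, runs on WebAssembly in the browser
    ├── Pages/                   # interactive WebAssembly pages
    └── Program.cs               # WebAssemblyHostBuilder and client services
```

Keeping the interactive pages in the Client project and the hosting concerns in the server project is what gives you the separation of concerns discussed in the previous paragraph.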
As your application grows or requires server-side functionality, having a separate server project provides a scalable and maintainable architecture.\nThis design pattern, where decoupled or loosely coupled apps are encouraged, is preferred over tightly coupled applications, especially as the complexity of the project increases.\nClose the browser, which brings you back into the Visual Studio development environment.\nThis confirms the Blazor app is running as expected.\nIn the next section, you learn how to update the Home.razor page and add your own custom HTML-layout, CSS structure and actual runtime code.\nUpdating the template with your custom code Blazor allows you to combine web page layout code (Razor pages), basically HTML and CSS, together with actual application source code (C# DotNet), in the same razor files. I can\u0026rsquo;t compare it with previous development environments, but it seems to be one of the great things about Blazor - and I really like it, since it somewhat simplifies the structure of your application source code itself.\nAnother take is creating the web page layout first, and only adding logic later on. So let\u0026rsquo;s start with creating a basic web page, adding a search field and a button. All Razor Pages the app uses are typically stored in the \Components subfolder.\nYou can choose to reuse the Home.razor sample page and continue from there, or create a new Razor Page and update the route path. To show you how Blazor Components are working, let\u0026rsquo;s define our SearchMarvel page as a new page under the \Pages section in the Client project. Save it as SearchMarvel.razor\nIn this part, we start with adding a search field and a search button to the web page layout. 
Insert the following snippet of code, replacing all the current content on the page:\n@page \u0026#34;/searchmarvel\u0026#34; \u0026lt;h1 class=\u0026#34;text-center text-primary\u0026#34;\u0026gt; Blazor Marvel Finder\u0026lt;/h1\u0026gt; \u0026lt;div class=\u0026#34;text-center\u0026#34;\u0026gt; \u0026lt;div class=\u0026#34;p-2\u0026#34;\u0026gt; \u0026lt;input class=\u0026#34;form-control form-control-lg w-50 mx-auto mt-4\u0026#34; placeholder=\u0026#34;Enter Marvel Character\u0026#34; /\u0026gt; \u0026lt;/div\u0026gt; \u0026lt;div class=\u0026#34;p-2\u0026#34;\u0026gt; \u0026lt;button class=\u0026#34;btn btn-primary btn-lg\u0026#34;\u0026gt;Find your Favorite Marvel Hero\u0026lt;/button\u0026gt; \u0026lt;/div\u0026gt; \u0026lt;/div\u0026gt; This adds the necessary objects on the web page. And let\u0026rsquo;s run this update to see what we have for now. So the layout for the search part of the app is done. Let\u0026rsquo;s move on with the design of the actual response / result items. The return from the Marvel API can be presented in a table gridview, but that\u0026rsquo;s not that nice-looking; I remembered having physical cards as collector items as a kid, so I did some searching for a similar digital experience. Interestingly enough, there is a CSS-class object \u0026ldquo;card\u0026rdquo;, which nicely reflects this experience. 
So let\u0026rsquo;s add the next snippet of code for this response layout.\nAdd the following code, below the snippet you copied earlier:\n\u0026lt;div class=\u0026#34;container\u0026#34;\u0026gt; \u0026lt;div class=\u0026#34;row row-cols-1 row-cols-md-2 row-cols-lg-3\u0026#34;\u0026gt; \u0026lt;div class=\u0026#34;col mb-4\u0026#34;\u0026gt; \u0026lt;div class=\u0026#34;card\u0026#34;\u0026gt; \u0026lt;img src=\u0026#34;https://via.placeholder.com/300x200\u0026#34; class=\u0026#34;card-img-top\u0026#34;\u0026gt; \u0026lt;div class=\u0026#34;card-body\u0026#34;\u0026gt; \u0026lt;h5 class=\u0026#34;card-title\u0026#34;\u0026gt;Marvel Hero Name\u0026lt;/h5\u0026gt; \u0026lt;p class=\u0026#34;card-text\u0026#34;\u0026gt; Character details \u0026lt;/p\u0026gt; \u0026lt;/div\u0026gt; \u0026lt;/div\u0026gt; \u0026lt;/div\u0026gt; \u0026lt;/div\u0026gt; \u0026lt;/div\u0026gt; What this snippet does is add a \u0026ldquo;container\u0026rdquo; object, holding a responsive grid (the row-cols-* classes) that shows one card per row on small screens, two columns on medium screens and three columns on large screens. The card composition shows an image of the character on top, with the Hero name and character details below.\nLet\u0026rsquo;s run the code again to test if everything works as expected. Now wait, we lose quite some time on stopping the app, updating code, starting it again - so what we can do is use the new VS2022 feature called Hot Reload; if I set this to \u0026ldquo;Hot Reload on Save\u0026rdquo;, it will dynamically update the runtime state of the app based on my edits. Let\u0026rsquo;s check it out. While in debugging mode, check the \u0026ldquo;flame\u0026rdquo; icon in the menu: Enable the setting \u0026ldquo;Hot Reload on File Save\u0026rdquo;. Edit the card-title \u0026ldquo;Marvel Hero Name\u0026rdquo; to \u0026ldquo;Marvel Character Name\u0026rdquo; and check how the app refreshes itself without needing to stop/start. 
The search field is not doing anything yet, so we need to make sure - whenever we type something in that field - it kicks off an API call to the Marvel API back-end.\nFirst, we need to use the bind-value parameter for this field, linking it to a search task; Update the line with the field box as follows: \u0026lt;input class=\u0026#34;form-control form-control-lg w-50 mx-auto mt-4\u0026#34; placeholder=\u0026#34;Enter Marvel Character\u0026#34; @bind-value=\u0026#34;whotofind\u0026#34; /\u0026gt; add @bind-value=\u0026#34;whotofind\u0026#34; at the end of the line.\nIgnore the errors regarding \u0026ldquo;whotofind\u0026rdquo; for now. We\u0026rsquo;ll fix that in a minute.\nNext, we need to update the button code to actually pick up an action when clicking on it; this is done using the @onclick event Add @onclick=\u0026#34;FindMarvel\u0026#34; to the button element. The code snippet complains about unknown attributes, which is what we need to add in the actual code section of the app page:\nAdd the following @code section below the HTML/CSS layout\n@code { private string whotofind; private async Task FindMarvel() { // Call the Marvel API Console.WriteLine(\u0026#34;Marvel Character to find: \u0026#34; + whotofind); } } Within the curly brackets, we can use regular C# code Start with defining a string for \u0026ldquo;whotofind\u0026rdquo; Followed by defining a method (task) for the FindMarvel onclick action - for now, let\u0026rsquo;s write something to the console to validate our search field is working as expected The string \u0026ldquo;whotofind\u0026rdquo; refers to the search field object, where the Task \u0026ldquo;FindMarvel\u0026rdquo; refers to the button click action. Simply said, whenever we click the button, it will pick up the string content from the search field and send it to the Marvel API back-end. As we don\u0026rsquo;t have that yet, I\u0026rsquo;m just writing the data to the console, which is always a great test to validate the code is working as expected.\nSave the file, which will throw a warning regarding the hot reload. 
Since we added new actual code snippets, hot reload can\u0026rsquo;t just go and recognize it. So a reload is needed\u0026hellip; Select \u0026ldquo;Rebuild and Apply Changes\u0026rdquo;\nEnter the name of a Marvel character, for example \u0026ldquo;thor\u0026rdquo;, and notice nothing happens on the console/terminal. Why not?? Is it an error in our code, did we miss anything,\u0026hellip;?\nNO and YES :)\u0026hellip; our code is fine, but we are missing a new .NET8 feature for Blazor apps\u0026hellip; we need to specify the Render Mode\nNote \u0026gt; for more details on Blazor app Render Mode, check this article I guest-authored for SitePoint a few weeks ago, explaining the different options and how to use them.\nRemember when we created the Visual Studio project, we defined the WebAssembly Render Mode. Now to make this work, there are a few more changes needed in the source code: a) Define the InteractiveWebAssembly Render Mode in the App.razor file b) (Optionally), specify the InteractiveWebAssembly Render Mode for Pages and/or Components\nSo depending a bit on how much control you want, or how frequently you want to use Interactive Render mode, you would specify this in your App.razor, as a Global parameter - turning the full Blazor App into that mode. Or, if not all pages and/or components require that Render Mode, you can add the specific parameter to individual components.\nIn this example, I\u0026rsquo;ll show you how to use it on a \u0026lsquo;per page\u0026rsquo; level, knowing that either one would be OK for this sample app.\nAt the top of your SearchMarvel.razor page, after the @page line, add the following: @rendermode InteractiveWebAssembly which tells the page/component to use the Interactive Render Mode, which \u0026ldquo;enables\u0026rdquo; the button event in our case.\nSave the changes, and run the app; enter a Marvel character name in the search field, click the Find Button and notice the search field string is shown in the console now. 
This is what InteractiveWebAssembly Render Mode is doing\u0026hellip; I think the bare minimum app layout development is ready, so it\u0026rsquo;s time to set up the Marvel API-part of the solution in the next section. Configuring the Marvel Developer API Backend Code interaction Head over to the Marvel Developer website https://developer.marvel.com and grab the necessary API information. Select Create Account + Accept Terms \u0026amp; Conditions\nGrab the API keys (public \u0026amp; private)\nPublic: 579a41c9eccaf70a3a09c1xxxxxxxxxxx\nPrivate: 6362bd53a4c307c96fb27xxxxxxxxxx\nTo allow requests to come into the Marvel API back-end, you need to specify the source URL domains where the requests are coming from. Add localhost here, which is the URL you use for all testing on your development workstation. Later on, once the app runs in Azure, you need to add the Azure Service URL here as well\u0026hellip;\nOnce set up, head over to \u0026ldquo;interactive documentation\u0026rdquo;, and walk through the different API placeholders and keywords one can use, to show the capabilities. For the app later on, we will use \u0026ldquo;namestartswith\u0026rdquo;, as it is the easiest to use - \u0026ldquo;name\u0026rdquo; could work, but it requires knowing the explicit name of the character, and having it correctly spelled. Click the \u0026ldquo;Try it out\u0026rdquo; button. The result shows the outcome + the exact URL that was used: Blazor WebAssembly doesn\u0026rsquo;t come with the HttpClient package installed by default, which means we need to add the NuGet package for this. 
(Although if you want, you could also find NuGet packages that provide similar functionality), as well as specifying the necessary Service, in both the server-side and client-side project.\nFrom the app.client project, select Manage NuGet Packages, and search for Microsoft.Extensions.Http as package name.\nOnce the package got installed, update the Program.cs file in the client project, by adding the following line below the \u0026ldquo;var builder = WebAssemblyHostBuilder\u0026hellip;\u0026rdquo; line: builder.Services.AddScoped(sp =\u0026gt; new HttpClient { BaseAddress = new Uri(\u0026#34;https://gateway.marvel.com:443/v1/public\u0026#34;) }); Next, open the Program.cs on the server-side project, and add the following line to the //Add Services to the container section: builder.Services.AddHttpClient(); Next, using .NET dependency injection, create a reference to the HttpClient in your Blazor SearchMarvel.razor page @page \u0026#34;/\u0026#34; @inject HttpClient HttpClient \u0026lt;h1 class=\u0026#34;text-center text-primary\u0026#34;\u0026gt; Blazor Marvel Finder\u0026lt;/h1\u0026gt; \u0026lt;div class=\u0026#34;text-center\u0026#34;\u0026gt; As you could see from the Marvel output, they are using JSON; this means, when calling the HttpClient, we also receive a JSON object back. To \u0026lsquo;recognize\u0026rsquo; the data from the JSON response in our Web App, we need to link it to a data model, using a C# class. There are a few different ways to do this: either create it manually, or use Visual Studio\u0026rsquo;s \u0026lsquo;Edit - Paste Special - Paste JSON As Classes\u0026rsquo;, which will create the necessary class setup for you. 
However, in this specific scenario, I don\u0026rsquo;t need all the details from the JSON response (although you could definitely update the app yourself to display all the information you want about a Marvel character\u0026hellip;)\nTo help transform a JSON response into an actual C# class, I often rely on a free website, https://json2csharp.com, which allows for pasting in a JSON payload, which then gets converted to a C# class structure.\nIn the Visual Studio app.client project, create a new subfolder \u0026ldquo;Models\u0026rdquo;, and add a new item in there, called MarvelResult.cs. We could copy the full content of the JSON conversion output into this class file, but for this sample, we don\u0026rsquo;t need all the data Marvel provides \u0026ndash; so I made some changes and ended up with the core pieces of data I want, like image, name and description. The code snippet I\u0026rsquo;m using for this example looks as follows:\nnamespace BlazorMarvel8.Models { public class MarvelResult { public string AttributionText { get; set; } public Datawrapper Data { get; set; } public class Datawrapper { public List\u0026lt;Result\u0026gt; Results { get; set; } } public class Result { public int Id { get; set; } public string Name { get; set; } public string Description { get; set; } public Image Thumbnail { get; set; } public class Image { public string Path { get; set; } public string Extension { get; set; } } } } } With the class in place, let\u0026rsquo;s update SearchMarvel.razor, to make sure it recognizes the model class MarvelResult. To do this, we need to add a private MarvelResult field, reflecting the data class we just created: private MarvelResult _marvelResult; As we stored the MarvelResult.cs class model in a different folder within the application source code, we also need to update our page details, telling it to \u0026ldquo;use\u0026rdquo; the Models subfolder to find it. 
This is done using the @using statement at the top of the SearchMarvel.razor page: @using BlazorMarvel8.Models @page \u0026#34;/\u0026#34; @inject HttpClient HttpClient Now the class gets nicely recognized.\nLet\u0026rsquo;s update the FindMarvel task with the required code snippet to compose the dynamic URL to connect to, as well as calling the HttpClient function. As per the Marvel API docs, we also need to integrate the public API key into our URL search string, so we have to define the string for this first. By the way, the full request URL to use is visible from the Interactive Documentation page where we ran the \u0026rsquo;try it out\u0026rsquo; search task (https://gateway.marvel.com:443/v1/public/characters?nameStartsWith=spider\u0026apikey=579a41c9eccaf70a3a09c1722ef6c2fc)\nThe updated code snippet looks like this now:\n@code { private MarvelResult _marvelResult; private string whotofind; private string MarvelapiKey = \u0026#34;579a41c9eccaf70a3a09c1722ef6c2fc\u0026#34;; After which we can update the FindMarvel task as follows: private async Task FindMarvel() { Console.WriteLine(whotofind); var url = $\u0026#34;characters?nameStartsWith={whotofind}\u0026amp;apikey={MarvelapiKey}\u0026#34;; _marvelResult = await HttpClient.GetFromJsonAsync\u0026lt;MarvelResult\u0026gt;(url, new System.Text.Json.JsonSerializerOptions { PropertyNamingPolicy = System.Text.Json.JsonNamingPolicy.CamelCase }); } While all the code pieces are done, note that since .NET 6 the project templates enable nullable reference type checking by default. This is what the green squiggly lines are flagging: the value could be null, which could potentially break your application, since it expects a real value in there. Fixing the warnings properly is the better practice, but for this little sample app it is totally OK to disable the nullable check. 
This can be done from the Properties of the project. Render JSON Response data into HTML Layout That\u0026rsquo;s all from a code-snippet perspective; the last piece of updates goes back into the HTML layout of the web page itself, updating the content of the card object. Since we most probably get an array of results back, meaning more than one, we need to go through a \u0026ldquo;foreach\u0026rdquo; loop; also, there might be scenarios where we don\u0026rsquo;t get back any results (like the character doesn\u0026rsquo;t exist, a typo in the character\u0026rsquo;s name,\u0026hellip;), so we will add a little validation check on that too, using an if (_marvelResult != null) check. Let\u0026rsquo;s go ahead!\nAt the top of the card object (class=container), or right below the section where we defined the search button, insert the @if statement, and move the whole div section between the curly brackets, updating the fixed fields we defined earlier with the MarvelResult class objects: @if (_marvelResult != null) { \u0026lt;div class=\u0026#34;container\u0026#34;\u0026gt; \u0026lt;div class=\u0026#34;row row-cols-1 row-cols-md-2 row-cols-lg-3\u0026#34;\u0026gt; @foreach (var result in _marvelResult.Data.Results) { \u0026lt;div class=\u0026#34;col mb-4\u0026#34;\u0026gt; \u0026lt;div class=\u0026#34;card\u0026#34;\u0026gt; \u0026lt;img src=\u0026#34;@($\u0026#34;{result.Thumbnail.Path}.{result.Thumbnail.Extension}\u0026#34;)\u0026#34; class=\u0026#34;card-img-top\u0026#34;\u0026gt; \u0026lt;div class=\u0026#34;card-body\u0026#34;\u0026gt; \u0026lt;h5 class=\u0026#34;card-title\u0026#34;\u0026gt;@result.Name\u0026lt;/h5\u0026gt; \u0026lt;p class=\u0026#34;card-text\u0026#34;\u0026gt; @result.Description \u0026lt;/p\u0026gt; \u0026lt;/div\u0026gt; \u0026lt;/div\u0026gt; \u0026lt;/div\u0026gt; } \u0026lt;/div\u0026gt; \u0026lt;/div\u0026gt; } Run the app and see the result in action. That\u0026rsquo;s it for now. Great job! 
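If the cards ever come back empty, a quick way to sanity-check the trimmed-down MarvelResult model outside the running app is a small console sketch. This is my own illustration (the abbreviated JSON sample is made up to mirror the Marvel payload shape, not copied from the API); case-insensitive matching maps the camelCase JSON names onto the PascalCase C# properties, just like the CamelCase naming policy does in the FindMarvel task:

```csharp
using System;
using System.Text.Json;
using BlazorMarvel8.Models; // the Models subfolder created earlier

// Abbreviated, made-up sample mirroring the Marvel response shape.
var json = """
{ "attributionText": "Data provided by Marvel",
  "data": { "results": [ { "id": 1, "name": "Spider-Man", "description": "",
    "thumbnail": { "path": "https://example/img", "extension": "jpg" } } ] } }
""";

// Case-insensitive matching binds camelCase JSON to PascalCase properties.
var options = new JsonSerializerOptions { PropertyNameCaseInsensitive = true };
var result = JsonSerializer.Deserialize<MarvelResult>(json, options);
Console.WriteLine(result?.Data.Results[0].Name); // Spider-Man
```

If a property prints as null here, the model and the payload are out of sync, which is usually quicker to spot this way than through the rendered cards.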
Making the cards â€˜flipâ€™ Note: this part is left out of the ScifiDevCon presentation to keep the video within the expected time â€“ what weâ€™re doing here is integrating more CSS layout components on to a new Page in the web app, which provides a more dynamic look-and-feel to the Marvel cards we have.\nWhile CSS can be difficult â€“ and trust me it is â€“ I literally googled for â€œflipping cards CSSâ€ and found a snippet of code on https://w3schools.com, and it worked almost straight awayâ€¦\nHere we go:\nLetâ€™s copy the current state of the page we have, and store it in a different page; so we grab SearchMarvel.razor and copy/paste it to FlipMarvel.razor this will allow me to also demonstrate some other Blazor features around Menu Navigation and how to use object-specific css; meaning, CSS that will only be picked up by the specific page, and not interfere with the rest of the application CSS we already have.\nOpen FlipMarvel.razor page; First thing we need to change, is the Page Routing, pointing to the â€œ/flipâ€ routing directory instead of the â€œ/â€, as that one is linked to the index.razor page.\nGo to this link: https://www.w3schools.com/howto/tryit.asp?filename=tryhow_css_flip_card\nSelect the code between the tags\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 \u0026lt;style\u0026gt; body { font-family: Arial, Helvetica, sans-serif; } .flip-card { background-color: transparent; width: 300px; height: 300px; perspective: 1000px; } .flip-card-inner { position: relative; width: 300px; height: 300px; text-align: center; transition: transform 0.6s; transform-style: preserve-3d; box-shadow: 0 4px 8px 0 rgba(0,0,0,0.2); } .flip-card:hover .flip-card-inner { transform: rotateY(180deg); } .flip-card-front, .flip-card-back { position: absolute; width: 300px; height: 300px; 
-webkit-backface-visibility: hidden; backface-visibility: hidden; } .flip-card-front { background-color: #bbb; color: black; } .flip-card-back { background-color: #2980b9; color: white; transform: rotateY(180deg); } \u0026lt;/style\u0026gt; and paste this under the @using section and the section of the code you already have (Note: ignore the @using marveltake2.models in the screenshot, itâ€™s the name of my test project) Next, we need to update the layout of the card item itself, in the section within the â€œforeachâ€ loop, as thatâ€™s where the data is coming in, and getting displayed @foreach(var result in _marvelResult.Data.Results)\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 { \u0026lt;div class=\u0026#34;col mb-4\u0026#34;\u0026gt; \u0026lt;div class=\u0026#34;flip-card\u0026#34;\u0026gt; \u0026lt;div class=\u0026#34;flip-card-inner\u0026#34;\u0026gt; \u0026lt;div class=\u0026#34;flip-card-front\u0026#34;\u0026gt; \u0026lt;img class=\u0026#34;thumbnail\u0026#34; src=\u0026#34;@($\u0026#34;{result.Thumbnail.Path}.{result.Thumbnail.Extension}\u0026#34;)\u0026#34; style=\u0026#34;width:300px;height:300px;\u0026#34;\u0026gt; \u0026lt;/div\u0026gt; \u0026lt;div class=\u0026#34;flip-card-back\u0026#34;\u0026gt; \u0026lt;h5\u0026gt;@result.Name\u0026lt;/h5\u0026gt; \u0026lt;p\u0026gt; @result.Description \u0026lt;/p\u0026gt; \u0026lt;/div\u0026gt; \u0026lt;/div\u0026gt; \u0026lt;/div\u0026gt; \u0026lt;/div\u0026gt; } What we do here is basically pointing to the different CSS-snippets for each style we want to get applied; we have the flip-card div class, next the flip-card-inner and flip-card-front. 
For the front, we want to use the image, so we keep the img class details as-is, but change the width and height to 300px, to make sure it looks like a nice rectangle on screen.\nNext, we add a class for the flip-card-back, where we will show the Marvel character name and description.\nThat\u0026rsquo;s all we need for now; so let\u0026rsquo;s have a look, by launching the app.\nSince the previous page was index.razor, it gets loaded by design (from index.html), so we need to update the URL to pick up the /flip page, by adding it to the end of the URL, such as https://localhost:7110/flip (note, the port number will be different on your end). Search for a character, and see the resulting cards: about the same as before, but let\u0026rsquo;s hover over a card: it flips and shows the character name and description (if provided by Marvel) on the back of the card. Cool!!\nLet\u0026rsquo;s switch back to the code and add a menu item for the \u0026ldquo;flip\u0026rdquo; page to our left-side navigation menu. Open the file NavMenu.razor within the server-side Layout folder. Add a new section for this menu item, by copying one from above, and make minor changes to the href reference (flip) and change the menu item label to Flip. The icons come from the Open Iconic library, which is referenced as part of the Blazor Bootstrap template. Note that you can switch to MudBlazor, Telerik, or several other component frameworks for richer layout styles. Open https://useiconic.com and find a suitable icon, for example loop-circular: \u0026lt;div class=\u0026#34;nav-item px-3\u0026#34;\u0026gt; \u0026lt;NavLink class=\u0026#34;nav-link\u0026#34; href=\u0026#34;flip\u0026#34;\u0026gt; \u0026lt;span class=\u0026#34;oi oi-loop-circular\u0026#34; aria-hidden=\u0026#34;true\u0026#34;\u0026gt;\u0026lt;/span\u0026gt; Flip \u0026lt;/NavLink\u0026gt; \u0026lt;/div\u0026gt; When you run the app again, the new menu item will appear. 
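As an aside: the "CSS that will only be picked up by the specific page" idea mentioned earlier maps to Blazor's built-in CSS isolation feature. Instead of an inline style block inside the page, the rules can live in a companion stylesheet next to the component; a minimal sketch, assuming the page is named FlipMarvel.razor:

```css
/* FlipMarvel.razor.css - with CSS isolation, Blazor scopes these rules to
   the FlipMarvel component only, so they cannot leak into (or be overridden
   by) the rest of the application's CSS. */
.flip-card { background-color: transparent; width: 300px; height: 300px; perspective: 1000px; }
.flip-card:hover .flip-card-inner { transform: rotateY(180deg); }
```

The inline style block used in this walkthrough works fine too; the isolated file is simply the tidier option once an app grows beyond a demo.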
Given the href=\u0026ldquo;flip\u0026rdquo;, it will redirect to the base URL (https://localhost:7110) /flip route. Since we are changing the layout a bit here, why not modify the default purple color from the Blazor template to the well-known Marvel dark red?\nOpen MainLayout.razor, locate the existing sidebar div, and replace it with the following style object: \u0026lt;div style=\u0026#34;background-image:none;background-color:darkred;\u0026#34; class=\u0026#34;sidebar\u0026#34;\u0026gt; This changes the default purple color to dark red. This completes our development part. Let\u0026rsquo;s move on to the next step, and integrate our app code with GitHub Source Control (which actually should have happened at the start, before writing a single line of code \u0026ndash; but hey, it\u0026rsquo;s a sample scenario, right?) Integrating Visual Studio with GitHub Source Control With that, let\u0026rsquo;s close this project and save it to GitHub, so you can grab it as a reference. From the explorer, click the \u0026ldquo;Git changes\u0026rdquo; tab and select Create GitHub Repository. Click Create and Push, and provide a description as commit message (I typically call this first action the \u0026ldquo;init\u0026rdquo;).\nWait for the git push action to complete successfully. Connect to the GitHub repository and confirm all source code is there.\nNote: the actual source code I used for the Festive Tech Calendar presentation can be found here: petender/FestiveBlazor2022live (github.com)\nWhenever you make changes to the source code in Visual Studio and save them, Git source control keeps track of these and allows you to commit the changes into the GitHub repository. I recommend committing changes frequently, basically after each \u0026ldquo;important\u0026rdquo; update to the code. 
Publish Blazor Web Assembly app to Azure Static Web Apps In this last section, I will show you how to publish this web app to Azure Static Web Apps, a web hosting service in Azure for static web frameworks like Blazor, React, Vue and several others.\nFrom the Azure Portal, create new resource / static web app\nProvide base information for this deployment:\nResource group \u0026ndash; any name of choice\nName of the app \u0026ndash; any unique name for the app\nSource = GitHub\nPlan = Free\nRegion = any region of your choice\nScroll down and authenticate to GitHub; next, select the source repo in GitHub where the code is stored (the one we just created)\nClick Build Details to provide more parameters regarding the Blazor app itself. Note you need to change the default App location from /Client to /, since our source code is in the root of the Blazor WebAssembly project, without using an ASP.NET hosted back-end.\nOnce published, it triggers a GitHub Actions pipeline to publish the actual content\nThe YAML pipeline code is stored in the .github/workflows/ subfolder within the GitHub repository. You shouldn\u0026rsquo;t need to update this file though; it just works out-of-the-box.\nCheck in Actions what\u0026rsquo;s happening:\nOpen the details for the Build \u0026amp; Deploy workflow\nSelecting any step in the Action workflow will show more details:\nWait for the workflow to complete successfully.\nNavigate back to the Azure Static Web App, click its URL and see the Blazor web app running as expected.\nWhen searching for a Marvel character, this throws an error though, which can be validated from the Inspect option of the browser:\nRemember at the start, when we configured the API calls at the Marvel Developer site, we needed to specify the source URLs from where the calls are allowed. This Azure Static Web App URL is not configured yet. (Hence why I didn\u0026rsquo;t worry too much about including my API key as a hard-coded string in the source code.) Add the Static Web App URL as an authorized referrer on the Marvel Developer site, and click Update to save those changes. 
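For reference, the generated workflow file in .github/workflows/ contains a deployment step roughly like the fragment below. This is an illustrative sketch only (the step name, secret name and exact file name vary per repository; Azure generates the real file for you), showing where the App location choice from the portal ends up:

```yaml
# Illustrative fragment of the generated Static Web Apps workflow file.
- name: Build And Deploy
  uses: Azure/static-web-apps-deploy@v1
  with:
    azure_static_web_apps_api_token: ${{ secrets.AZURE_STATIC_WEB_APPS_API_TOKEN }}
    repo_token: ${{ secrets.GITHUB_TOKEN }}
    action: "upload"
    app_location: "/"          # root of the standalone Blazor WebAssembly project
    output_location: "wwwroot" # published build output served by Static Web Apps
```

You shouldn't need to touch this file, but knowing which knobs map to app_location and output_location helps when a deployment fails on a wrong folder path.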
Trigger a new search, which should reveal the actual Marvel character details. Remember you can use both the default (index) page, as well as the flip page.\nSummary In this article, I provided all the necessary steps to build a Blazor .NET 8 WebAssembly application. Starting from the default template, you updated snippets of code to create a search field and corresponding action button to trigger the search. You learned about using HttpClient to interact with an external API back-end. Once this was all working, you looked into using some additional \u0026ldquo;flip card\u0026rdquo; CSS layout features, and how to update the Blazor navigation menu.\nOnce the development work was done, we saved the code in a GitHub repository.\nLast, you deployed an Azure Static Web App, interacting with the GitHub repository to pick up the source code and publish it using a GitHub Actions workflow.\nI would like to thank the organizing team of ScifiDevCon 2024 for having accepted my session submission for the 3rd year in a row. Especially since this was my first attempt to do some (semi)live coding, to share my excitement of how I learned to write and build code at age 48. I\u0026rsquo;m already brainstorming on what Blazor app I can share in next year\u0026rsquo;s edition\u0026hellip;\n/Peter\n","date":"2024-05-18T00:00:00Z","permalink":"/post/building-a-marvel-hero-app-with-blazor-.net8/","title":"ScifiDevCon 2024 : Building a Marvel Hero App using Blazor and .NET8 "},{"content":"Hi Readers,\nThis post is merely for myself, documenting all the steps I needed to go through to be able to send emails from a PowerShell script, as part of an Azure DevOps YAML pipeline flow. And since I was documenting it for our internal application, I thought it would hopefully help someone out there looking for a similar solution.\nWhy not use the built-in Email Notification system Azure DevOps provides email notification features, but this comes with a few assumptions. The biggest one is that the receiver(s) should be Project Members. 
Which was not the case in my scenario, since I want to send out deployment status emails, at the start of a pipeline and when completed successfully, to external recipients.\nWhat\u0026rsquo;s wrong with Marketplace Extensions? In my Classic Release Pipelines, I relied on Sendgrid, and it actually worked OK, and with the limited amount of emails (up to 100/daily), it was a free service on top.\nSince I was migrating the full project away from Classic Release Pipelines to YAML, and wanted to customize the email flow a bit more, I started looking into other options. Keeping all functionality within my Azure subscription, was also a benefit.\nThe new setup architecture In short, the new setup architecture is based on the following:\nAzure LogicApp with HTTP Trigger, sending an Outlook 365 email based on parameters to an external recipient\nADO Release Pipeline using YAML tasks\nOne of the tasks is using Azure Powershell, to compose and send the email\nLet\u0026rsquo;s detail each step a bit more.\nDeploying and composing the Azure Logic App The starting point is deploying the Azure Logic App, and composing the workflow steps.\nDeploy an Azure Logic App resource using the Consumption plan (pay per trigger)\nOnce deployed, open the Logic App Designer, and choose **When an HTTP request is received\u0026quot; as trigger.\nWhen it asks for a JSON Schema in the Request Body, you can use the \u0026lsquo;use sample payload to generate schema\u0026rsquo;, by entering the different parameters you need for the email details itself, in the JSON format. 
In my setup, I have the following fields:\nMessage, which is the actual body of the email, and gets send along as a PowerShell script object; Subject, which contains the actual subject of the email, and also coming from the PowerShell script object; To, which contains the recipient\u0026rsquo;s email address, and passed on from the PowerShell script object; 1 2 3 4 5 6 7 8 9 10 11 12 13 14 { \u0026#34;properties\u0026#34;: { \u0026#34;Message\u0026#34;: { \u0026#34;type\u0026#34;: \u0026#34;string\u0026#34; }, \u0026#34;Subject\u0026#34;: { \u0026#34;type\u0026#34;: \u0026#34;string\u0026#34; }, \u0026#34;To\u0026#34;: { \u0026#34;type\u0026#34;: \u0026#34;string\u0026#34; } }, \u0026#34;type\u0026#34;: \u0026#34;object\u0026#34; } When you save the Logic App with this step, it will provide you with the unique LogicApp HTTP Trigger URL to connect to. Save this URL on the side, since you need it later in the PowerShell script.\nAdd a new step to the workflow, choosing Send an Email. Authenticate with your Office 365 credentials (although other security options such as Service Principal might be required in your production setup).\nThe required fields for this step are Body, Subject and To. Mapping with the 3 property fields from the HTTP Trigger. Note, I created the property \u0026ldquo;Message\u0026rdquo;, instead of calling it \u0026ldquo;Body\u0026rdquo;, since the full HTTP Trigger JSON response we get in through the PowerShell script, is known as \u0026ldquo;Body\u0026rdquo;. So I made some mistakes there initially in not getting the correct information mapped as expected.\nThe nice thing with Logic Apps, is that any next step in the workflow, can reuse input values from any previous step.\nFor the Body-parameter of the Send an email step, we want to use the \u0026ldquo;Message\u0026rdquo; content from the HTTP Trigger response. Therefore, click in the Body-field, which will open the Dynamic content window. 
From here, it will show all known properties from the \u0026ldquo;When an HTTP request is received\u0026rdquo; step in the workflow.\nSelect the necessary fields from the Dynamic content, and map them with the required fields of the Send an Email task: 1 2 3 Body -\u0026gt; Message Subject -\u0026gt; Subject To -\u0026gt; To This is all for now to get the Logic App configured.\nThe PowerShell script for sending emails The idea is that the PowerShell script gets triggered from the YAML pipeline task. I initially tried using the Inline option, but that didn\u0026rsquo;t recognize the here-strings (gets explained later if you don\u0026rsquo;t know what this means\u0026hellip;) as well as some other limitations. I also didn\u0026rsquo;t like having the actual Message/Body details - for which I\u0026rsquo;m using HTML and tables - in my YAML pipeline. So using the ScriptPath option was much cleaner. And also allowing a more flexible scenario, where I could call different \u0026lsquo;sendemail\u0026rsquo;.ps1 scripts, depending on the specifics of the pipeline tasks.\nLet me first explain the high-level setup of the script:\ndefine param settings, capturing/transferring the parameters from the YAML task ScriptArguments create a variable for the LogicApp HTTP Trigger Url create a variable for the LogicApp HTTP Trigger Body data, holding the \u0026ldquo;To\u0026rdquo;, \u0026ldquo;Subject\u0026rdquo; and \u0026ldquo;Message\u0026rdquo; content create a variable for the LogicApp HTTP Trigger Headers information create a variable for the actual Invoke-RestMethod, sending all previous info along Here are some of the snippets for each component of the script:\n1. define param settings, capturing/transferring the parameters from the YAML task ScriptArguments\n1 2 3 4 5 param ( [string]$BuildDefinitionName, [string]$To ) 2. 
create a variable for the LogicApp HTTP Trigger Url\n$logicappUrl = \u0026#34;https://yourlogicappurl.centralus.logic.azure.com:443/workflows/1676b43cc2904b/triggers/manual/paths/invoke?api-version=2016-10-01\u0026amp;sp=%2Ftriggers%2Fmanual%2Frun\u0026amp;sv=1.0\u0026amp;sig=OK-Z_dgspUufMWMlBL531\u0026#34; 3. create a variable for the LogicApp HTTP Trigger Body data, holding the \u0026ldquo;To\u0026rdquo;, \u0026ldquo;Subject\u0026rdquo; and \u0026ldquo;Message\u0026rdquo; content\n$body = @{ To = $To Subject = @\u0026#34; MTTDemoDeploy - $BuildDefinitionName - Deployment Kicked Off Successfully \u0026#34;@ Message = @\u0026#34; \u0026lt;actual email body content here - where I used a combination of text, HTML and CSS for the email layout\u0026gt; \u0026#34;@ } Note the usage of the @\u0026quot; \u0026quot;@ construct, known as a here-string. Here-strings allow you to define multiline strings without needing to escape special characters or use concatenation.\nIn this code snippet, I\u0026rsquo;m using a here-string to define the value of the Subject property in the $body hash table. The @\u0026quot; at the beginning indicates the start of the here-string. The text within the here-string (between the opening @\u0026quot; and the closing \u0026quot;@) is preserved exactly as written, including line breaks and any special characters. The variable $BuildDefinitionName is interpolated within the here-string, and corresponds to a param object defined at the start of the script. This will hold the actual ScriptArguments value from the YAML pipeline steps later.\nAlso note the positioning of the closing \u0026quot;@ all the way at the start of the line: here-strings cannot have any spaces or tabs before the terminator - that will throw an error when running the script. (Told you I really wanted to document all my observations and issues before I got this working smoothly\u0026hellip;)\n4. 
create a variable for the LogicApp HTTP Trigger Headers information When you configured the HTTP Trigger step in the LogicApp, it shows a little popup message, saying the trigger expects the Header to have the content type of application/json. This is how you specify that:\n$headers = @{ \u0026#39;Content-Type\u0026#39; = \u0026#39;application/json\u0026#39; } 5. create a variable for the actual Invoke-RestMethod, sending all previous info along\n$sendemail = @{ Uri = $logicappUrl Method = \u0026#39;POST\u0026#39; Headers = $headers Body = $body | ConvertTo-Json }\nInvoke-RestMethod @sendemail\nFinally, save this file as start_email.ps1 or another filename that works for you, and make sure it is part of the ADO repo where you have the YAML pipeline. To keep it a bit structured, I created a new subfolder Email with another subfolder .ado in which I stored the file.\nREPO-ROOT /Email /.ado /start_email.ps1\nI call this script at the start of the YAML pipeline, but also created a finish_email.ps1, which I call upon successful completion of the YAML pipeline.\nThe YAML pipeline task With both LogicApp and PowerShell script created, the last step in the process is defining the YAML PowerShell task which sends the trigger and necessary parameters to the PowerShell script.\nLike any YAML pipeline, this is just another task, relying on some variables and task settings.\nI created 2 new variables, one for the EmailDomain and one for the full EmailAddress:\nvariables:\nname: pipelineName value: $(Build.DefinitionName) name: EmailDomain value: \u0026#39;@company.com\u0026#39; name: recipientEmail value: \u0026#34;${{parameters.User}}`$(EmailDomain)\u0026#34; Critical here is the usage of the ` in the 2nd part of the value setting. 
Since a bare @ has special meaning in PowerShell (among others for splatting), the @ in the email domain was breaking the string, so I needed to escape it with the backtick to avoid this issue.\nNext, within the stage / jobs / steps level of the YAML pipeline, I inserted the following task:\nsteps: - task: AzurePowerShell@5 displayName: \u0026#39;Email - Deployment Kicked Off\u0026#39; inputs: azureSubscription: \u0026#39;\u0026lt;yourADOServiceConnection\u0026gt;\u0026#39; resourceGroupName: \u0026#39;RG where you deployed the LogicApp\u0026#39; azurePowerShellVersion: \u0026#39;LatestVersion\u0026#39; #required to use the latest version of Azure PowerShell ScriptType: \u0026#39;filePath\u0026#39; ScriptPath: $(System.DefaultWorkingDirectory)/Email/.ado/start_email.ps1 ScriptArguments: \u0026#39;-BuildDefinitionName:$(pipelineName) -To:$(recipientEmail)\u0026#39; The ScriptArguments setting is the crucial piece if you ask me, as it contains the different parameters you want to pass on to the PowerShell script; these are also the parameters that need to map to the JSON properties in the HTTP Trigger step of the Logic App. (Again, I needed multiple attempts to get all this working, hence my documentation on how I got this all glued together\u0026hellip;)\nIt picks up a parameter called BuildDefinitionName, corresponding to the same param name at the start of the PowerShell script, which contains the value of the pipelineName variable I defined earlier in the YAML pipeline. The 2nd parameter I\u0026rsquo;m passing on is the To field, which corresponds with the composed recipientEmail variable in the YAML pipeline.\nThat\u0026rsquo;s pretty much it!!\nFYI, it is also possible to send more YAML pipeline parameters or variables along with the ScriptArguments, such as deployment output, passwords or any other values. 
For example, I have a pipeline where I\u0026rsquo;m creating an Azure Container Registry with a random name, as well as a uniquely generated password for the admin.\necho \u0026#34;##vso[task.setvariable variable=acrname]$acrname\u0026#34; which I then send on to the PowerShell script from the ScriptArguments like this:\nScriptArguments: \u0026#39;-BuildDefinitionName:$(pipelineName) -To:$(recipientEmail) -acrname:$(acrname) -location:${{ parameters.Location }} -adminPassword:\u0026#34;$(genPassword)\u0026#34;\u0026#39; #needs quotes because of special characters Summary Apart from the built-in ADO notification emails for the most \u0026lsquo;common\u0026rsquo; scenarios when using pipelines, your DevOps project might need other emails to be sent as part of your pipeline flow. Where you could use existing Marketplace tools such as SendGrid or others, I decided to come up with my own PowerShell-based script, interacting with Azure LogicApps.\nWhile the setup involves jumping through several hoops, it is not all that difficult (easy to say once it all works, after spending half a day of troubleshooting at different levels - of which the main one was my rusty PowerShell skills\u0026hellip;) once all pieces fall in place.\nLet me know if you want to give this a try, and share your results with me!\nCheers!!\n/Peter\n","date":"2024-03-03T00:00:00Z","permalink":"/post/sending-emails-from-ado-pipelines-using-powershell/","title":"Sending Emails from Azure DevOps using PowerShell and Azure LogicApps"},{"content":"Hugo\nAzure Static Storage Sites\nWelcome to this year\u0026rsquo;s Festive Tech Calendar!! Hi everyone, welcome to my contribution to this year\u0026rsquo;s Festive Tech Calendar once more. This will be the fourth year, and I still love the concept of bringing some (Azure) joy to you/your family this season. If you ask me what the biggest news in tech was this year, especially within the Microsoft ecosystem, it\u0026rsquo;s Azure AI. 
Shouldn\u0026rsquo;t be surprising to most of you who know me and my role within Microsoft as Technical Trainer - providing Azure workshops every week to our top customers and partners across the globe - we also integrated (a lot! :)) of AI focus early in the year. And it is only becoming more important.\nTherefore, I decided to bring you an Azure AI-inspired topic for this year\u0026rsquo;s Festive Tech Calendar session, using Computer Vision - Document Intelligence, or what I describe as OCR on steroids.\nIn the late 1920s and into the 1930s, Emanuel Goldberg developed what he called a \u0026ldquo;Statistical Machine\u0026rdquo; for searching microfilm archives using an optical code recognition system, which evolved into an IBM OCR solution.\nAbout 100 years later, OCR is transitioning into powerful document and text analysis capabilities, thanks to the Azure AI Document Intelligence APIs.\nBy reading through this post, and following the demo steps, you can build your own \u0026ldquo;Statistical Machine\u0026rdquo; in no time. And from there, learn about the Azure AI Document Intelligence APIs, to take it even further\u0026hellip; let\u0026rsquo;s go!\nTo embrace Azure AI and Microsoft Copilot even more myself, I actually used them to create parts of this blog post. What a wonderful world we live in today!\nAbout 2 weeks ago, I presented a session on the same topic for the GlobalAI Community Conference, led by Sjoukje Zaal, Amy Kate Boyd and Henk Boelman, for which the video is available on YouTube.\nSo instead of creating a similar video, I worked with Azure AI, Microsoft Copilot and my own notes to produce this article. 
Let me know if you liked it\u0026hellip;\nIn this article, I will use the following flow:\nI\u0026rsquo;ll start with setting the scene on Azure AI, using Computer Vision for OCR\nFollowed by the more advanced scenario, using Intelligent Document Processing or IDP\nLast, I\u0026rsquo;ll show you how you can train the IDP using your own custom models\nAnd I hope to do all this using several demos\u0026hellip; which you can go through in your own Azure subscription\nSetting the scene on Azure AI, using Computer Vision for OCR Computer Vision allows for different use cases, of which the most important ones are:\nImage Analysis \u0026ndash; typically used to detect objects or items Spatial Analysis \u0026ndash; is what you would use to detect people, like video cameras in a store OCR or Optical Character Recognition \u0026ndash; allows you to recognize text, both printed and handwritten Facial Recognition \u0026ndash; recognize human identity, without exposing privacy details Deploying Azure AI - Computer Vision If you don\u0026rsquo;t already have one in your subscription, you\u0026rsquo;ll need to provision an Azure AI Services resource. If you don\u0026rsquo;t have an Azure subscription yet, you can sign up for a [Free subscription](https://azure.microsoft.com/en-us/free).\nOpen the Azure portal at https://portal.azure.com, and sign in using the Microsoft account associated with your Azure subscription. In the top search bar, search for Azure AI services, select Azure AI Services, and create an Azure AI services multi-service account resource with the following settings: Subscription: Your Azure subscription Resource group: Choose or create a resource group (if you are using a restricted subscription, you may not have permission to create a new resource group - use the one provided) Region: Choose from East US, France Central, Korea Central, North Europe, Southeast Asia, West Europe, West US, or East Asia* Name: Enter a unique name Pricing tier: Standard S0 *Azure AI Vision 4.0 features are currently only available in these regions. 
Select the required checkboxes and create the resource. Wait for deployment to complete, and then view the deployment details. When the resource has been deployed, go to it and view its Keys and Endpoint page. This is what you would need from a development perspective in your application source code. I will show you an example of how to use a Computer Vision Docker Container later on, which also needs those parameters to run successfully\u0026hellip; From the Computer Vision tab within Azure AI Services, create a new Computer Vision Resource, keeping most default settings as-is. Wait for deployment to complete. From the Overview section of Computer Vision, notice the Vision Studio button. Click the Open Vision Studio button to navigate to Azure AI Computer Vision Studio. From here, select Optical Character Recognition, and select Extract text from images. Here, you can test how text gets recognized, using the sample images provided, or you can upload your own images as well. I would recommend you try with handwritten notes as well, especially when your handwriting skills are as great as mine\u0026hellip; How does OCR recognition actually work? When we talk about Azure AI, it means using APIs, which allow you to bring a source into the AI engine, which processes a given scenario - like reading text in the case of OCR - after which we retrieve the result.\nLooking at this example, which I\u0026rsquo;ll show you in a quick demo, each item of text gets placed into a box/boundary, which gets translated into understandable characters -\u0026gt; forming words.\nStep 1 involves creating a requestID\nStep 2 means reading out the results, for the specific requestID\nGoing back to the Vision Studio in the Azure Portal, where you uploaded or selected an image, and saw the outcome of how text gets recognized, select JSON. For each character or set of characters recognized as text, different JSON properties and values get created.
These identify the boundaries of the text on the image. This is the core work of the Read API. So let\u0026rsquo;s have a more detailed look into that one for a second. The easiest way I found to show this is spinning up an Azure Cognitive Services Vision Docker Container.\nAssuming you know a bit about Docker containers, and you have Docker Engine running on your local machine, execute the following command:\ndocker run --rm -it -p 5050:5000 --memory 4g --cpus 1 mcr.microsoft.com/azure-cognitive-services/vision/read:3.2-model-2022-04-30 Eula=accept Billing=https://yourcomputevisionresource.cognitiveservices.azure.com/ ApiKey=yourcomputevisionapikey\nReplace the Billing and ApiKey values with the correct values from the Computer Vision Keys picked up earlier.\nWith the Docker container running, open the browser to http://localhost:5050/status, which confirms the Read API service is ready, and your API Key is valid. Next, connect to http://localhost:5050/swagger to interact with the different API endpoints of the Read service running within the container. From the swagger API interface, select POST in the Analyze section, and click Try it out. Scroll down a bit, and in the Request body section, provide a URL to an actual image. For example, you can use the sample image below, which is the same one available in the Vision Studio portal, showing the nutrition facts about some food item.\n{ \u0026#34;url\u0026#34;: \u0026#34;https://raw.githubusercontent.com/Azure-Samples/cognitive-services-sample-data-files/master/ComputerVision/Images/printed_text.jpg\u0026#34; }\nClick the Execute button. This sends an API request and returns an analyze request ID from this URL: http://localhost:5050/vision/v3.2/read/analyze Copy the request id, and open it in the browser, e.g.
http://localhost:5050/vision/v3.2/read/analyzeResults/339da9a7-aa6b-4c81-a5d0-5840448fdfaf\nThis brings up the full JSON structure of the analyzed text, including text snippets, bounding boxes, and more.\nWhile quite impressive if you ask me, we have had OCR technology doing almost the same for the last 50-60 years already. Does anyone remember copiers and scanners, saving to a PDF document? Basically based on the same technology\u0026hellip;\nSo let\u0026rsquo;s focus a bit more on the next level of OCR, using Intelligent Document Processing.\nUsing Intelligent Document Processing or IDP Similar to the Computer Vision Read API and Vision Studio, Azure AI also provides the Form Recognizer Service, which was recently renamed to Document Intelligence, enabling Intelligent Document Processing or IDP.\nThe Document Intelligence Read Optical Character Recognition (OCR) model runs at a higher resolution than Azure AI Vision Read and extracts print and handwritten text from PDF documents and scanned images. It also includes support for extracting text from Microsoft Word, Excel, PowerPoint, and HTML documents. It detects paragraphs, text lines, words, locations, and languages. The Read model is the underlying OCR engine for other Document Intelligence prebuilt models like Layout, General Document, Invoice, Receipt, Identity (ID) document, Health insurance card, and W2, in addition to custom models.\nOptical Character Recognition (OCR) for documents is optimized for large text-heavy documents in multiple file formats and global languages. It includes features like higher-resolution scanning of document images for better handling of smaller and dense text; paragraph detection; and fillable form management. OCR capabilities also include advanced scenarios like single character boxes and accurate extraction of key fields commonly found in invoices, receipts, and other prebuilt scenarios.\nFrom the Azure AI Portal, navigate to Document Intelligence, and create a new resource within. Default settings should be ok as-is.
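Beyond the Studio, the same resource can be exercised straight from code. Below is a minimal, hedged sketch of the Document Intelligence v3 REST flow (POST an analyze request, then poll the returned Operation-Location URL); the endpoint, key, and document URL are placeholders you would replace with your own values:

```python
import json
import time
import urllib.request

# Placeholders - substitute your own resource endpoint and key.
ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"
KEY = "<your-key>"
API_VERSION = "2023-07-31"

def analyze_request_url(model_id: str) -> str:
    """Build the analyze URL for a given Document Intelligence model id."""
    return f"{ENDPOINT}/formrecognizer/documentModels/{model_id}:analyze?api-version={API_VERSION}"

def analyze_document(model_id: str, document_url: str) -> dict:
    """POST the document URL, then poll Operation-Location until the run finishes."""
    body = json.dumps({"urlSource": document_url}).encode()
    req = urllib.request.Request(
        analyze_request_url(model_id), data=body, method="POST",
        headers={"Ocp-Apim-Subscription-Key": KEY, "Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        op_location = resp.headers["Operation-Location"]  # 202 Accepted + result URL
    while True:
        poll = urllib.request.Request(op_location, headers={"Ocp-Apim-Subscription-Key": KEY})
        with urllib.request.urlopen(poll) as resp:
            result = json.load(resp)
        if result["status"] in ("succeeded", "failed"):
            return result
        time.sleep(2)

if __name__ == "__main__":
    # Hypothetical sample document; any publicly reachable invoice URL works.
    print(analyze_document("prebuilt-invoice", "https://example.com/sample-invoice.pdf"))
```

Note the same two-step request/poll pattern we saw with the Read container earlier; only the host and model id change.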
Once deployed, from the Overview section, notice the Document Intelligence Studio option, and open it. In the previous examples, the content was coming from an image file type (jpeg, \u0026hellip;). But sometimes, we have more specific data types, such as forms, receipts, invoices, passports, \u0026hellip;\nThis is where the computer vision text analyzer is not fine-tuned enough. That\u0026rsquo;s where we will use the Form Recognizer service, now known as the Document Intelligence service.\nAzure AI Document Intelligence is a cloud-based Azure AI service that enables you to build intelligent document processing solutions. Massive amounts of data, spanning a wide variety of data types, are stored in forms and documents. Document Intelligence enables you to effectively manage the velocity at which data is collected and processed and is key to improved operations, informed data-driven decisions, and enlightened innovation.\nDocument Intelligence distinguishes 3 different model types:\nDocument Analysis models - enable text extraction from forms and documents and return structured business-ready content ready for your organization\u0026rsquo;s action, use, or progress.\nPrebuilt models - enable you to add intelligent document processing to your apps and flows without having to train and build your own models. From the Document Intelligence Studio, select a prebuilt model type of choice. I\u0026rsquo;ll use Invoice, but the approach is the same for the other ones. Click Run Analysis. As you can see, the different text items from the document (Invoice) are getting identified and tagged with a label. Similar to before, the technical output is stored in a JSON file.\nHow to train the IDP using your own custom models Custom models - are trained using your labeled datasets to extract distinct data from forms and documents, specific to your use cases. Standalone custom models can be combined to create composed models.
Document Intelligence uses advanced machine learning technology to identify documents, detect and extract information from forms and documents, and return the extracted data in a structured JSON output. With Document Intelligence, you can use document analysis models, pre-built/pre-trained models, or your trained standalone custom models.\nCustom models now include custom classification models for scenarios where you need to identify the document type prior to invoking the extraction model. Classifier models are available starting with the 2023-07-31 (GA) API. A classification model can be paired with a custom extraction model to analyze and extract fields from forms and documents specific to your business to create a document processing solution. Standalone custom extraction models can be combined to create composed models.\nCustom document models can be one of two types: custom template (also called custom form) and custom neural (also called custom document) models. The labeling and training process for both models is identical, but the models differ as follows:\nCustom extraction models To create a custom extraction model, label a dataset of documents with the values you want extracted and train the model on the labeled dataset. You only need five examples of the same form or document type to get started.\nCustom neural model The custom neural (custom document) model uses deep learning models and a base model trained on a large collection of documents. This model is then fine-tuned or adapted to your data when you train the model with a labeled dataset. Custom neural models support structured, semi-structured, and unstructured documents to extract fields. Custom neural models currently support English-language documents. When you\u0026rsquo;re choosing between the two model types, start with a neural model to determine if it meets your functional needs.
See neural models to learn more about custom document models.\nThe custom template or custom form model relies on a consistent visual template to extract the labeled data. Variances in the visual structure of your documents affect the accuracy of your model. Structured forms such as questionnaires or applications are examples of consistent visual templates.\nYour training set consists of structured documents where the formatting and layout are static and constant from one document instance to the next. Custom template models support key-value pairs, selection marks, tables, signature fields, and regions. Template models can be trained on documents in any of the supported languages. For more information, see custom template models.\nIf the language of your documents and extraction scenarios supports custom neural models, we recommend that you use custom neural models over template models for higher accuracy.\nFrom Document Intelligence Studio, scroll down and select Custom Models. Next, select Custom Extraction Model\nFrom here, create a new project\nYou need to specify where your document model sources (I am using my electricity bills, but they could be receipts, delivery notes, business forms,\u0026hellip;) can be found in Azure Blob Storage, but apart from that, all settings should be clear, I hope.\nOpen the project you created, from where it allows you to upload sample files. Upload 5-10 documents of the same type, as that is what is required to train the model\nYou can choose to use the Auto Label feature, or Draw Region, to establish the different labels for the different text sections of your documents.\nOnce done for several documents of the same type, click Train.
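For completeness: the Train step the Studio performs can also be invoked programmatically. Below is a hedged sketch of the documentModels:build REST operation (using the 2023-07-31 API version mentioned earlier); the endpoint, key, model id, and the SAS URL to your labeled training container are all placeholders:

```python
import json
import urllib.request

# Placeholders - substitute your resource endpoint, key, and a SAS URL to the
# Blob container that holds your labeled training documents.
ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"
KEY = "<your-key>"
API_VERSION = "2023-07-31"

def build_request_body(model_id: str, container_sas_url: str, build_mode: str = "template") -> dict:
    """Request body for documentModels:build - buildMode is 'template' or 'neural'."""
    return {
        "modelId": model_id,
        "buildMode": build_mode,
        "azureBlobSource": {"containerUrl": container_sas_url},
    }

def start_build(model_id: str, container_sas_url: str, build_mode: str = "template") -> str:
    """Kick off model training; returns the Operation-Location URL to poll for completion."""
    body = json.dumps(build_request_body(model_id, container_sas_url, build_mode)).encode()
    req = urllib.request.Request(
        f"{ENDPOINT}/formrecognizer/documentModels:build?api-version={API_VERSION}",
        data=body, method="POST",
        headers={"Ocp-Apim-Subscription-Key": KEY, "Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.headers["Operation-Location"]
```

Polling the returned Operation-Location follows the same pattern as the analyze calls shown earlier.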
Training produces your custom AI model.\nLast, you can run a test to validate how a new custom document gets analyzed; all items in the document will get recognized.\nSummary In this article, I wanted to introduce you to the exciting world of Azure AI, and more specifically how Computer Vision can be used to recognize text in images, known as OCR, but even more so, how the Azure AI Document Intelligence API allows you to bring more advanced document template recognition into your applications.\nThanks to the Azure Festive Tech Calendar team for having me! Take care and Happy Holidays!\nCheers!!\n/Peter\n","date":"2023-12-23T00:00:00Z","permalink":"/post/festive-2023-ocr-on-steroids/","title":"Festive Tech Calendar - Azure AI - OCR on Steroids"},{"content":"In this post, I want to share my review of another Azure book I read recently, Azure Architecture Explained, this time from Brett Hargreaves and David Rendon, published by Packt Publishing and available on Amazon as well as other e-book subscription platforms.\nApart from the great content, it was nice to see one of my own Microsoft Technical Trainer Team colleagues, Sarah Kong, providing the foreword.\nAbout the book (from the cover) This book provides you with a clear path to designing optimal cloud-based solutions in Azure, by delving into the platform\u0026rsquo;s intricacies.\nYou\u0026rsquo;ll begin by understanding the effective and efficient security management and operation techniques in Azure to implement the appropriate configurations in Microsoft Entra ID. Next, you\u0026rsquo;ll explore how to modernize your applications for the cloud, examining the different computation and storage options, as well as using Azure data solutions to help migrate and monitor workloads. You\u0026rsquo;ll also find out how to build your solutions, including containers, networking components, security principles, governance, and advanced observability.
With practical examples and step-by-step instructions, you\u0026rsquo;ll be empowered to work on infrastructure-as-code to effectively deploy and manage resources in your environment.\nBy the end of this book, you\u0026rsquo;ll be well-equipped to navigate the world of cloud computing confidently.\nWhat this book covers The book has 14 chapters, about 400 pages in total (!!!), organized in 3 different \u0026lsquo;Parts\u0026rsquo;:\nPart I - Effective and Efficient Security Management and Operations in Azure This first section lays out the Identity Foundation for hybrid cloud, touching on Azure Active Directory and Microsoft Entra. I guess Dave and Brett were in the middle of the writing process when Microsoft decided on the name change of the Azure Identity platform from AAD to Entra ID, which is totally acceptable and didn\u0026rsquo;t bother me while reading through the content. It got emphasized several times (now Entra ID), and after page 3, you\u0026rsquo;re used to the new name. The mixed use of Azure Active Directory and Entra ID remains in chapter 2, which provides a deeper dive into the typical administrative and architectural side of what it takes to get started with Entra ID from scratch, as well as how to deal with hybrid identity when running an on-premises Active Directory scenario.\nAfter going through Chapters 1 and 2, this first part closes with the positioning of Microsoft Sentinel, with a focus on mitigating lateral movement, which also looks at a possible security breach scenario with suspicious Office 365 user sign-ins. Which I think is a great scenario, since most Azure customers are probably Office 365 customers as well - or the other way around.\nPart II - Architecting Compute and Network Solutions Part II is the biggest chunk of the book, and covers A LOT. Starting with Data Solutions, it provides insights on Azure Storage Accounts, Azure SQL and Azure Cosmos DB.
From there, it switches to Virtual Machine migration, as well as App Services, and how to migrate data.\nThere is a - somewhat short, in my personal opinion - topic on Azure Monitor, followed by a good portion of Azure Containers, covering both Azure Container Instances and Container Apps. (Interestingly enough, no details on Azure Kubernetes Services, which makes me believe the authors may have used the AZ-305 Designing Microsoft Azure Infrastructure Solutions Study Guide as a guideline for what this book should cover, and what not\u0026hellip;)\nThe biggest chapter in this section is - understandably - Azure Networking, stretching over 60 pages, and not leaving out any topic in the context of Virtual Networking and Hybrid Networking using VPN and Azure WAN, covering Load Balancing scenarios, as well as Azure Firewall for protection. The protection/security focus moves over into Chapter 9, which expands on how to secure your applications, using Azure Front Door, Azure Application Gateway, as well as VNET integration for App Services.\nPart III - Making the Most of Infrastructure-as-Code for Azure This part shifts from Azure solution architecting to Governance and DevOps, using Infrastructure as Code with Bicep, as well as pipeline-based deployments using Azure DevOps.\nThe last chapter wraps up the content of the book by sharing more Tips from the Field on governance, monitoring, identity protection, networking and containers.\nMy Personal Feedback and observations As said earlier, this book covers a lot!! Which I think is its biggest benefit. I might be a bit biased, having been a co-author of a similar Azure Architect-oriented book, as well as teaching the AZ-305 course for multiple years now. The content is great, but doesn\u0026rsquo;t specifically target Cloud Architects, since it also has several exercises/tasks in there. Which is OK for administrators, developers and DevOps teams, but not (always) something you expect a Cloud Architect to still work on.
Often, it turns into a hands-on how-to-do-something in Azure. This is fine, as it will help those personas who are wearing multiple hats at their organization (aren\u0026rsquo;t we all??), and often clarifies what got explained in the text, with additional how-to guidance.\nWhat works best when going through this book is approaching each topic as a stand-alone deep-dive on the subject. It covers the cloud-architectural design level in great detail, and brings it back to the administrative level.\nMaybe Packt (or the authors) should have chosen a different title, something like \u0026ldquo;Azure Resources Explained\u0026rdquo;, as I\u0026rsquo;m left a bit in the dark on the pure cloud architect questions, which typically cover business, non-technical and technical challenges when moving workloads to the cloud, or deploying new ones as cloud-native. Which leaves me thinking the publishing team used the Azure AZ-305 exam as a lead for the majority of the content. And since that exam and certification is targeted towards cloud infrastructure architects, it is one of the best books I could recommend in helping with the preparation for studying and passing that exam. Even more so, if you are thinking of studying for the AZ-104 exam (Azure Administrator Associate), this book will also be more than a valid resource. And since AZ-104 is a prerequisite for the AZ-305 Architect credential, having all content crammed into a single book is a double win if you ask me!\nSummary I don\u0026rsquo;t see myself as the target audience for this book, since I live in Azure every day, yet I still enjoyed reading the book page-by-page.
The fact that it is this complete, stretching over a lot of the Azure resources and services, combining both Architect-like as well as Administrator-like content, makes this a great book to have on your shelf.\nPing me if you have any additional questions.\nCheers!!\n/Peter\n","date":"2023-12-10T00:00:00Z","permalink":"/post/packt-book-review---azure-architecture-explained/","title":"Packt Book Review - Azure Architecture Explained"},{"content":"In this post, I want to share my review of another Azure book I read recently, Azure for Decision Makers, this time from Jack Lee, Jason Milgram and David Rendon, published by Packt Publishing and available on Amazon as well as other e-book subscription platforms.\nAbout the book Azure for Decision Makers provides a comprehensive overview of the latest updates in cloud security, hybrid cloud and multi-cloud solutions, and cloud migration in Azure. This book is a must-have introduction to the Microsoft Azure cloud platform, demonstrating the substantial scope of digital transformation and innovation that can be achieved with Azure\u0026rsquo;s capabilities.\nThe first set of chapters will get you up to speed with Microsoft Azure\u0026rsquo;s evolution before showing you how to integrate it into your existing IT infrastructure. Next, you\u0026rsquo;ll gain practical insights into application migration and modernization, focusing mainly on migration planning, implementation, and best practices. Throughout the book, you\u0026rsquo;ll get the information you need to spearhead a smooth migration and modernization process, detailing Azure infrastructure as a service (IaaS) deployment, infrastructure management, and key application architectures.\nThe concluding chapters will help you to identify and incorporate best practices for cost optimization and management, Azure DevOps, and Azure automation.
By the end of this book, you\u0026rsquo;ll have learned how to lead end-to-end Azure operations for your organization and effectively cost-optimize your processes - from the planning and cloud migration stage through to troubleshooting.\nChapter Overview The book is light to read, and well-structured in 6 different chapters:\nChapter 1, Introduction, covers the reasons an organization might engage with cloud computing, and why Microsoft Azure in particular is a compelling choice. It goes on to discuss the various types of cloud environments and crucial security and governance considerations when migrating to the cloud.\nChapter 2, Modernizing with Hybrid, Multicloud, and Edge Computing, covers how these modernizing approaches can drive significant efficiency, agility, and innovation improvements for any organization. It also covers the set of tools Microsoft Azure provides for a modern, flexible, and secure infrastructure transition.\nChapter 3, Migration and Modernization, describes how the benefits of the cloud can help businesses stay ahead of the curve and drive innovation in their industry, as well as how an organization can accelerate its cloud adoption journey.\nChapter 4, Maximizing Azure Security Benefits for Your Organization, covers best practices for securing workloads on Microsoft Azure, Microsoft Sentinel as a tool for intelligent security analytics, and Microsoft Defender for Cloud, Identity, Endpoint, and Cloud Apps, each of which can help identify suspicious activity and prevent advanced attacks.\nChapter 5, Automation and Governance in Azure, explains the importance of automation and governance, discussing the two native Microsoft Infrastructure as Code (IaC) frameworks: Azure Resource Manager (ARM) templates and Bicep.
Governance is a critical aspect of managing resources on Azure, and the tools and services available to facilitate this are also covered.\nChapter 6, Maximizing Efficiency and Cost Savings in Azure, discusses the impact of cost optimization in the cloud, and its place as one of the five pillars of the Microsoft Azure Well-Architected Framework. It goes on to explain the different ways that Azure Advisor can help organizations optimize Azure resources based on usage patterns, as part of a comprehensive cost optimization strategy.\nSummary Azure for Decision Makers covers why and how an organization can achieve a successful migration to the cloud. This book discusses different kinds of cloud solutions and describes how to make the best decisions when modernizing your organization by migrating to the cloud. Azure for Decision Makers ensures that you can make the most of the cost optimization, efficiency, automation, and security that a cloud solution with Microsoft Azure provides.\nCheers!!\n/Peter\n","date":"2023-10-22T00:00:00Z","permalink":"/post/packt-book-review---azure-for-business-decision-makers/","title":"Packt Book Review - Azure for Business Decision Makers"},{"content":"Hi y\u0026rsquo;all!\nOver the last 18 months, I\u0026rsquo;ve been developing a tool for our internal Microsoft Trainer team, allowing them to deploy trainer demo scenarios in Azure using a Blazor Front-End web app, connecting to Azure DevOps pipelines using REST API calls.\nAt the start of the project, Classic Release pipelines were still common, since YAML was too new, and rather unknown. However, over the last few months, more and more I was thinking of shifting from Classic to YAML Pipelines. 
But - although the app isn\u0026rsquo;t that big or complex - it means connecting to different API endpoints for YAML; reading out the status of ongoing pipelines would be different, and getting a listing of all deployed pipelines for a given trainer would also be different.\nSo you see where this is going\u0026hellip; I don\u0026rsquo;t want to rewrite the whole app, as it also means rewriting all Classic Pipelines to YAML syntax. While I know the export to YAML is available, I didn\u0026rsquo;t want to underestimate the effort, nor did I like the time pressure. As in the end, what is the added value to the end user? (If that is not spoken like a true developer, I have no idea\u0026hellip;)\nBeing more and more familiar with triggering DevOps Pipelines from REST API calls, I thought about the following\u0026hellip; what if I could create a Classic Release Pipeline, which triggers a YAML Pipeline?\nAfter searching around a little bit on how curl would help here, it seemed possible, since there is a Classic Pipeline Task for #BASH.\nThe YAML Pipeline components? In order to trigger the YAML Pipeline from the Classic Release, you need to capture the YAML Pipeline Id, as well as any parameter values you need to provide.\nThe YAML Pipeline Id: this is also known as the definitionId, and can be found by going to the YAML Pipeline under the ADO Pipelines section, and checking the URL in the address bar - it will be something like https://dev.azure.com///_build?definitionId=101 Take note of the definitionId number, as you need it in the Bash script later.
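As an aside, if clicking through the portal is not your thing, the definitionId can also be looked up via the Azure DevOps Pipelines REST API (GET _apis/pipelines returns each pipeline with its id and name). A hedged Python sketch, where the organization, project, pipeline name, and PAT are placeholders:

```python
import json
import urllib.request
from base64 import b64encode

def pipelines_list_url(organization: str, project: str) -> str:
    """List-pipelines endpoint; each returned entry's 'id' is the definitionId used later."""
    return f"https://dev.azure.com/{organization}/{project}/_apis/pipelines?api-version=6.0-preview.1"

def find_definition_id(organization: str, project: str, pipeline_name: str, pat: str):
    """Return the id of the pipeline whose name matches, authenticating with a PAT."""
    token = b64encode(f":{pat}".encode()).decode()  # basic auth: empty user + PAT
    req = urllib.request.Request(
        pipelines_list_url(organization, project),
        headers={"Authorization": f"Basic {token}"},
    )
    with urllib.request.urlopen(req) as resp:
        pipelines = json.load(resp)["value"]
    return next((p["id"] for p in pipelines if p["name"] == pipeline_name), None)
```

This is just a convenience lookup; the Bash approach below only needs the resulting number.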
In my example, the YAML Pipeline has a section with parameters, where I\u0026rsquo;m using the trainer alias, the Service Connection/Azure Subscription info and the selected Azure Region for the deployment, like this:\nparameters:\n- name: MTTAlias\n  type: string\n  default: petender\n- name: azureSubscription\n  type: string\n  default: MTTAliasServiceConnection\n- name: Location\n  type: string\n  default: westus\nEverything else in the YAML Pipeline is rather standard, triggering an Azure Bicep template deployment, using an Azure CLI Task and pointing to the Bicep file (/.azure/main.bicep), like this:\nvariables:\n  RunName: \u0026#34;${{parameters.MTTAlias}}\u0026#34;\n  RGName: \u0026#34;MTTDemoDeployRG${{parameters.MTTAlias}}TBOOTH\u0026#34;\n  dotnetFunctionZipPath: $(Build.ArtifactStagingDirectory)/dotnet\n  nodeFunctionZipPath: $(Build.ArtifactStagingDirectory)/node\nstages:\n- stage: Infra\n  jobs:\n  - job: Bicep\n    steps:\n    - checkout: self\n    - task: AzureCLI@2\n      name: deployBicep\n      inputs:\n        azureSubscription: ${{ parameters.azureSubscription }}\n        scriptType: \u0026#39;pscore\u0026#39;\n        scriptLocation: \u0026#39;inlineScript\u0026#39;\n        inlineScript: |\n          az group create --location ${{parameters.Location}} --name $(RGName)\n          $out = $(az deployment group create -f ./FastCar-TollBooth/.azure/main.bicep -g $(RGName) --parameters namingConvention=\u0026#34;${{parameters.MTTAlias}}\u0026#34; location=${{parameters.Location}} -o json --query properties.outputs) | ConvertFrom-Json\n          $out.PSObject.Properties | ForEach-Object {\n            $keyname = $_.Name\n            $value = $_.Value.value\n            echo \u0026#34;##vso[task.setvariable variable=$keyname;isOutput=true]$value\u0026#34;\n          }\nComposing The Classic Release Pipeline Create a new Classic Release Pipeline, with a single Stage \u0026ldquo;Azure Infra\u0026rdquo;, and add the #Bash Script as Task. Select Inline as Type, and in the Script box, copy the following script, which does the following: PIPELINE_ID
= the number of the YAML Pipeline Id:\nPIPELINE_ID=\u0026#34;101\u0026#34;\nURL = the URL of the actual DevOps YAML Pipeline REST API to use:\nURL=\u0026#34;$(SYSTEM.TEAMFOUNDATIONCOLLECTIONURI)$(SYSTEM.TEAMPROJECTID)/_apis/pipelines/$PIPELINE_ID/runs?api-version=6.0-preview.1\u0026#34;\necho $URL\nNext, I specify 3 variables, for which the values (the $() notation) refer to Classic Release Pipeline Variables (these are passed on from the Blazor Web App to the Classic Pipeline):\nMTTAliasValue=$(MTTAlias)\nServiceConnectionValue=$(ServiceConnection)\nLocationVarValue=$(LocationVar)\nNext, I specify the actual curl command to use, where I use the System.AccessToken variable to allow OAuth authentication by the system account, followed by specifying we\u0026rsquo;re sending a JSON header, and the actual JSON snippet which holds the data to pass along to the YAML pipeline - finding the correct syntax to capture the variable values from earlier was the biggest challenge here, taking a few failed attempts when triggering the pipeline :)\u0026hellip;\ncurl -s --request POST \\\n  -u \u0026#34;:$(System.AccessToken)\u0026#34; \\\n  --header \u0026#34;Content-Type: application/json\u0026#34; \\\n  --data \u0026#39;{\n    \u0026#34;resources\u0026#34;: {\n      \u0026#34;repositories\u0026#34;: {\n        \u0026#34;self\u0026#34;: {\n          \u0026#34;refName\u0026#34;: \u0026#34;refs/heads/main\u0026#34;\n        }\n      }\n    },\n    \u0026#34;templateParameters\u0026#34;: {\n      \u0026#34;MTTAlias\u0026#34;: \u0026#34;\u0026#39;\u0026#34;${MTTAliasValue}\u0026#34;\u0026#39;\u0026#34;,\n      \u0026#34;azureSubscription\u0026#34;: \u0026#34;\u0026#39;\u0026#34;${ServiceConnectionValue}\u0026#34;\u0026#39;\u0026#34;,\n      \u0026#34;Location\u0026#34;: \u0026#34;\u0026#39;\u0026#34;${LocationVarValue}\u0026#34;\u0026#39;\u0026#34;\n    }\n  }\u0026#39; \\\n  $URL\nThe full #Bash script looks as follows:\nPIPELINE_ID=\u0026#34;101\u0026#34;
URL=\u0026#34;$(SYSTEM.TEAMFOUNDATIONCOLLECTIONURI)$(SYSTEM.TEAMPROJECTID)/_apis/pipelines/$PIPELINE_ID/runs?api-version=6.0-preview.1\u0026#34;\necho $URL\nMTTAliasValue=$(MTTAlias)\nServiceConnectionValue=$(ServiceConnection)\nLocationVarValue=$(LocationVar)\ncurl -s --request POST \\\n  -u \u0026#34;:$(System.AccessToken)\u0026#34; \\\n  --header \u0026#34;Content-Type: application/json\u0026#34; \\\n  --data \u0026#39;{\n    \u0026#34;resources\u0026#34;: {\n      \u0026#34;repositories\u0026#34;: {\n        \u0026#34;self\u0026#34;: {\n          \u0026#34;refName\u0026#34;: \u0026#34;refs/heads/main\u0026#34;\n        }\n      }\n    },\n    \u0026#34;templateParameters\u0026#34;: {\n      \u0026#34;MTTAlias\u0026#34;: \u0026#34;\u0026#39;\u0026#34;${MTTAliasValue}\u0026#34;\u0026#39;\u0026#34;,\n      \u0026#34;azureSubscription\u0026#34;: \u0026#34;\u0026#39;\u0026#34;${ServiceConnectionValue}\u0026#34;\u0026#39;\u0026#34;,\n      \u0026#34;Location\u0026#34;: \u0026#34;\u0026#39;\u0026#34;${LocationVarValue}\u0026#34;\u0026#39;\u0026#34;\n    }\n  }\u0026#39; \\\n  $URL\nSpecify the Correct Permissions on the YAML Pipeline Remember we used the System.AccessToken variable, which refers to the built-in DevOps System Process. In order to make this Classic Release Pipeline trigger work, the built-in account must have Queue Builds permissions on the YAML Pipeline.\nNavigate to the YAML Pipeline. Click the ellipsis (the 3 dots) at the end of the Pipeline line, and from the context menu, select Manage Security. Select the Build Service Group, and set Allow for the Queue Builds permission.\nRunning the Pipeline Trigger the Classic Release Pipeline, and wait for it to complete. With the Classic Release completed, navigate to the YAML Pipeline, and see that this one is getting triggered / already running.\nSummary In this post, I walked you through a use case within Azure DevOps, where it might be useful to build an integration between both Pipeline worlds, Classic Releases and YAML Pipeline Releases.
By using a #Bash script with curl to call the YAML Pipeline REST API endpoint, as well as passing some parameters in a JSON structure, it is possible to trigger YAML Pipelines from a Classic Release Pipeline.\nI am using this scenario to give myself some time to continue updating the development work on the Blazor Front-end, pointing to YAML Pipelines only at some point, but for now, it gives me the flexibility to keep the same Classic URL Endpoints for my REST APIs, while gradually setting up new YAML Pipelines, migrating Classic to YAML, etc\u0026hellip;\nCheers!!\n/Peter\n","date":"2023-09-30T00:00:00Z","permalink":"/post/trigger-yaml-pipeline-from-classic-release-in-ado/","title":"Trigger a YAML Pipeline from a Classic Release Pipeline in Azure DevOps"},{"content":"Disclaimer: this was supposed to be a recorded session for Azure Back To School 2023, but due to a too-busy work-and-family schedule over the last 2 weeks, I didn\u0026rsquo;t find the time anymore. While I don\u0026rsquo;t like to let down this amazing community, I hope the textual descriptions of what I was going to talk about are still appreciated.\nAchieving Site Reliability Engineering with Azure In today\u0026rsquo;s digitally driven landscape, ensuring the reliability of cloud-based applications and services is paramount. Site Reliability Engineering (SRE) has emerged as a crucial discipline for achieving this goal. In this technical article, we will explore how Azure, Microsoft\u0026rsquo;s cloud platform, can be leveraged to implement and enhance Site Reliability Engineering practices. We\u0026rsquo;ll delve into key topics such as the Azure Well-Architected Framework, Azure Service Level Agreements (SLAs), best practices around DevOps, and the powerful toolset of Chaos Engineering, including Azure Chaos Studio.\nIntroduction to Site Reliability Engineering Site Reliability Engineering (SRE) is a discipline that originated at Google to blend software engineering with IT operations.
It focuses on creating scalable and highly reliable software systems. At its core, SRE aims to strike a balance between the need to innovate rapidly and the need to maintain system reliability.
The 3 words in the definition can be explained as follows:
Reliability: means guaranteeing that any running application is available according to the business requirements.
Engineering: refers to applying the principles of computer science and engineering to build and maintain systems and applications, from development to monitoring.
Site: this initially referred to THE SITE, yes, http://www.google.com, to guarantee it would be available globally, all the time, no matter what. (An interesting side-story I picked up from talking to one of the Google SRE founders is that they actually found out the site was becoming even more important than Google initially planned for - whenever anyone connected to a public wifi in a hotel, train station, coffee shop or similar, one of the first, if not the first, sites they would try to reach was... yes, Google.)
As an SRE team, you guarantee workload reliability; this could range from designing, to operating, and any process in between, to make systems more scalable, reliable and efficient.
In the meantime, "site" has broadened to services, as SREs typically manage large-scale datacenters on a global scale.
Drilling down on the role of an SRE would take me like a day or 2, but simply said, it comes down to 3 different core responsibilities:
Wearing the developer hat, writing software for large-scale workloads
Sometimes taking responsibility for the side-pieces like backup, monitoring, load balancing and the like, being the operations engineer if you want
And sometimes figuring out how to apply existing solutions to new problems
Key principles of SRE include:
Service Level Objectives (SLOs): Defining and measuring the desired level of reliability for a service using SLOs.
SLOs are a critical aspect of SRE, as they provide a clear target for system performance.
Error Budgets: SRE teams work within an error budget, which is the allowed downtime or errors a service can experience without violating its SLO. This concept encourages a focus on both reliability and innovation.
Automation: Automation is essential to SRE. By automating operational tasks and using Infrastructure as Code (IaC) principles, SREs reduce the risk of human error and increase efficiency.
Incident Response: SREs prioritize incident response and post-incident reviews to learn from failures and continually improve system reliability.
Which leads me to the next question: is SRE like DevOps 2.0? In short, DevOps is mainly focused on instilling a culture of bringing developer and operations teams together, to collaborate more closely with each other, to deliver value to the business.
SRE relies on DevOps practices to optimize reliability - I'll walk you through some demos on that later in this session with Azure DevOps. I always say there wouldn't be SRE without DevOps; but SRE is not replacing DevOps, it is rather augmenting it.
How to measure SRE success?
So now you know what SRE is, the next big question from your customers is probably: how can we measure its success?
While not 100% complete because of time constraints, it boils down to 8 different terms that are crucial to know about:
SLO - Service Level Objective, which refers to the target reliability for a given workload
SLI - Service Level Indicator; this works together with the SLO, as it is the actual measurement used to verify whether the reliability target is being met
RTO - Recovery Time Objective, which refers to the amount of time it takes to fix an issue and get a workload up and running again
RPO - Recovery Point Objective, which defines the point to which you can recover, meaning what is the foreseeable data loss
Specifically for measuring downtime and outages, the following 4 are important:
MTTF - Mean Time To Failure - how long does it take before something breaks
MTBF - Mean Time Between Failures - how long before the next outage occurs
MTTR - Mean Time To Repair - how long to fix it
MTTA - Mean Time To Acknowledge - how long to detect an outage
Azure Well-Architected Framework
If you would look down from Mars onto an Azure datacenter, it would look like this, having 3 core layers:
Azure Foundation: this is what I call the fabric, typically the layer that customers can't really touch, although it gives some configuration options
Azure Cloud Services: this is where the customer deploys workload infrastructure; this could be IaaS, PaaS, containers and serverless microservices
On top, I position Apps, like the actual workloads that are running
Important to emphasize is that using Azure is a shared responsibility; Microsoft needs to provide a reliable Azure Foundation, think of the physical datacenters, allowing customers to build their level of reliability and resilience on the Cloud Services layer - think of Availability Sets and Zones for VMs, global load balancers to redirect web traffic across Azure regions, or multi-region storage and database replication;
which results in reliable app runtimes.
So how do you get a view on Azure Foundation reliability?
Right, checking Azure Service Health; this is actually a combination of 3 different tools in one:
Azure Status: overall Azure status health
Service Health: personalized view of your Azure services and regions in use
Resource Health: shares the status of your individual cloud resources, e.g. a VM or App Service
The Azure Well-Architected Framework is a comprehensive guide provided by Microsoft to help architects build secure, high-performing, resilient, and efficient infrastructure for their applications. It aligns closely with SRE principles, as it emphasizes best practices for reliability and scalability.
Here are some key elements of the Azure Well-Architected Framework that contribute to SRE:
Reliability Pillar: This pillar of the framework specifically addresses the principles of SRE. It covers topics like fault tolerance, disaster recovery, and monitoring. Architects can use this guidance to design systems that meet their reliability SLOs.
Operational Excellence Pillar: SREs focus on automation and efficient operations. Azure's Operational Excellence Pillar provides guidance on automating tasks, reducing manual intervention, and improving operational efficiency.
Performance Efficiency Pillar: Meeting SLOs often requires optimizing performance. This pillar offers insights into selecting the right Azure resources and configurations to achieve optimal performance for your workloads.
Cost Optimization Pillar: Managing costs is essential in SRE. Azure provides tools and best practices for cost management and optimization, helping teams stay within their error budgets.
Apart from these, a few other best practices are summarized below.
Azure Service Level Agreements
Service Level Agreements (SLAs) are a crucial aspect of SRE, as they define the expected reliability and availability of Azure services.
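To make SLA and SLO percentages tangible, here is a minimal, illustrative Python sketch (generic math, not an official Azure calculator) that converts an availability target into a monthly downtime budget, and estimates the composite availability of serially dependent services, which is the product of their individual SLAs:

```python
# Illustrative helpers for reasoning about SLAs and error budgets.
# These formulas are generic availability math, not Azure tooling.

MINUTES_PER_MONTH = 30 * 24 * 60  # 43,200 minutes in a 30-day month

def downtime_budget_minutes(slo_percent: float) -> float:
    """Allowed downtime per 30-day month for a given availability SLO."""
    return MINUTES_PER_MONTH * (1 - slo_percent / 100)

def composite_availability(*slas_percent: float) -> float:
    """Composite availability of serially dependent services:
    the product of the individual availabilities."""
    result = 1.0
    for sla in slas_percent:
        result *= sla / 100
    return result * 100

# A 99.9% SLO leaves roughly 43 minutes of downtime per month:
print(round(downtime_budget_minutes(99.9), 1))  # → 43.2

# An app depending on a 99.95% App Service and a 99.99% SQL Database
# can promise at most ~99.94% end-to-end:
print(round(composite_availability(99.95, 99.99), 2))  # → 99.94
```

This is also why error budgets matter: the stricter the SLO, the smaller the budget left for both incidents and planned maintenance.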
Understanding Azure SLAs is essential for architects and SREs to design and operate reliable systems.
Key points about Azure SLAs:
Availability Guarantees: Azure SLAs typically guarantee high availability for services, such as Virtual Machines (VMs), Azure SQL Database, and Azure App Service. These SLAs specify the percentage of time a service is expected to be available.
Service Credits: Azure offers service credits if SLAs are not met. This financial compensation is part of Azure's commitment to providing reliable services.
Multi-Region Deployments: To enhance reliability, architects can design their applications to span multiple Azure regions. This ensures redundancy and reduces the risk of downtime.
Monitoring and Alerting: Implementing effective monitoring and alerting systems is crucial to detect and respond to SLA violations promptly.
Best Practices around DevOps with Regard to Azure Reliability
DevOps practices play a pivotal role in achieving SRE goals. Integrating DevOps and SRE principles can lead to a culture of continuous improvement and reliability. Here are some best practices:
Infrastructure as Code (IaC): Embrace IaC to automate the provisioning and configuration of Azure resources. Tools like Azure Resource Manager (ARM) templates and Terraform facilitate the management of infrastructure as code.
Continuous Integration and Continuous Deployment (CI/CD): Implement CI/CD pipelines to automate software deployments. Azure DevOps Services, GitHub Actions, and Jenkins are popular tools for building robust CI/CD pipelines on Azure.
Monitoring and Observability: Utilize Azure Monitor, Application Insights, and Log Analytics to gain real-time visibility into your applications and infrastructure. This enables proactive issue detection and resolution.
Automated Testing: Implement automated testing practices, including unit tests, integration tests, and end-to-end tests.
Azure DevTest Labs can help create test environments easily.
Containerization and Orchestration: Container technologies like Docker and Kubernetes can enhance application reliability and scalability. Azure Kubernetes Service (AKS) simplifies the management of Kubernetes clusters.
Incident Management: Define clear incident response procedures and automate incident detection and resolution where possible. Azure Service Health and Azure Logic Apps can be valuable here.
Chaos Engineering and Azure Chaos Studio
Chaos Engineering is a practice that involves deliberately injecting failures and faults into a system to test its resilience. Azure offers a powerful toolset, including Azure Chaos Studio, to help organizations practice Chaos Engineering and improve the reliability of their Azure-based applications.
Key components of Azure Chaos Studio:
Experimentation: Azure Chaos Studio allows you to create controlled experiments that simulate various failure scenarios, such as network disruptions, high CPU usage, or database outages.
Observability: Gain insights into how your system behaves under stress by collecting and analyzing telemetry data during chaos experiments. This data helps identify weaknesses and areas for improvement.
Automation: Automate the execution of chaos experiments to ensure consistency and repeatability. This is especially valuable for ongoing testing and validation of your system's reliability.
Integration with Azure Services: Azure Chaos Studio integrates seamlessly with Azure services, making it easy to test the resilience of Azure-based applications and services.
(For more details on Chaos Engineering and Azure Chaos Studio, read my recent blog post on the subject.)
Conclusion
Achieving Site Reliability Engineering with Azure involves a combination of best practices, tools, and a strong focus on reliability.
By following the Azure Well-Architected Framework, understanding Azure SLAs, implementing DevOps best practices, and experimenting with Chaos Engineering using Azure Chaos Studio, organizations can build highly reliable and resilient systems on Microsoft's cloud platform.
As Azure continues to evolve, it offers an ever-expanding set of tools and services that align with SRE principles. By staying informed about the latest Azure offerings and incorporating them into your SRE practices, you can ensure that your applications and services meet their reliability objectives in the dynamic world of cloud computing.
Thanks to the amazing Azure Back To School Team for having me for another year, and continuously supporting the Azure communities.
Cheers!!
/Peter
","date":"2023-09-09T00:00:00Z","permalink":"/post/achieving-sre-on-azure/","title":"Azure Back to School - Achieving SRE on Azure"},{"content":"Hey folks,
For the ones who have been following me here for a while, you know I'm passionate about Azure and DevOps (and yes, also Azure DevOps, lol). But to be honest, I've been eyeing another Azure service for a while: Azure Chaos Studio, available in public preview.
This article shares an introduction to Chaos Engineering, as well as walks you through the first steps it takes to set up Azure Chaos Studio, create experiments and validate the outcome.
Let's have a look.
Introduction to Chaos Engineering
Chaos Engineering is:
"the discipline of experimenting on a system in order to build confidence in the system's capability to withstand turbulent conditions in production." (Source: https://principlesofchaos.org/)
Chaos Engineering is all about experimenting - typically against production-running systems - to identify and find loopholes, pitfalls if you want, in the way the system is running, which makes the system less reliable.
The more loopholes we can identify upfront, the more confidence we can have in the system's reliability. By introducing a series of event simulations, whether based on real incidents that happened earlier or on simulated outages that could happen, we target our workloads and learn from the impact.
The curious case of CPU Pressure - Part I
Let's take CPU pressure as an easy use case.
Imagine a workload is running fine for months, with an average CPU load that's keeping the system healthy. Suddenly, a CPU spike occurs and crashes the application. Apart from troubleshooting the root cause of the CPU spike, probably a task for engineering or development teams, it might be equally relevant to find out why the system reacted with a crash of the application. Even more so: if we could have simulated a CPU spike happening, our engineering and development teams could have focused on mitigating the problem by releasing a fix, or updating the architecture to an even more fault-tolerant setup.
Don't get me wrong though, as Chaos Engineering is a lot more than injecting outage triggers (faults) to bring production environments to their knees. There's a lot more complexity involved, especially since an outage is typically not caused by one single failure, but rather by a series of incidents.
The curious case of CPU Pressure - Part II
Reusing the CPU pressure example, one could consider a scenario where CPU is spiking because of latency in database operations, putting a calculation or database update on hold. Or maybe there is a network connectivity issue, by which an operation cannot be written to the database back end, causing so many retry operations, which spikes CPU.
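The retry-storm scenario described here is exactly what exponential backoff with jitter is meant to prevent: instead of hammering the database with immediate retries (and burning CPU), each retry waits progressively longer, with randomness to de-synchronize clients. A generic, illustrative Python sketch (not from the original post):

```python
import random

def backoff_delays(max_retries: int = 5, base: float = 0.5, cap: float = 30.0):
    """Exponential backoff with full jitter: each retry waits a random
    amount between 0 and min(cap, base * 2**attempt) seconds.
    Spreading retries out like this avoids the synchronized retry storms
    that can spike CPU when a downstream dependency is slow."""
    for attempt in range(max_retries):
        yield random.uniform(0, min(cap, base * 2 ** attempt))

# The upper bound doubles every attempt: 0.5s, 1s, 2s, 4s, 8s, capped at 30s.
for attempt, delay in enumerate(backoff_delays()):
    print(f"retry {attempt}: waiting {delay:.2f}s")
```

In a real client you would `time.sleep(delay)` between attempts and give up (or open a circuit breaker) once the retries are exhausted.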
So instead of just "simulating" the CPU spike, it is also important to capture all the possible side effects that could cause CPU pressure.
Which - to me - also explains why it's called an engineering discipline, as there is quite some engineering involved in all the interactions across different systems, components, and workloads.
Now, you might think that Chaos Engineering is the next big thing (maybe even coming after SRE and DevOps?), but yet too revolutionary for your cloud environments. Nothing could be more wrong.
Netflix's Chaos Monkey
In fact, Chaos Engineering has been around for more than ten years already, initiated by software engineers from Netflix around 2008, when they started migrating from on-premises data centers to public cloud data centers. While there are a lot of similarities between managing your own data center and using public cloud, there are also big differences. It was mainly those differences that forced Netflix's engineers to create service architectures with higher resiliency.
Going through the testing related to this cloud migration resulted in the creation of an internally developed chaos orchestration tool around 2010, branded Chaos Monkey, which was published as an open-source product in 2012. More information on the tool and how to use it is available on GitHub.
Netflix designed Chaos Monkey to allow them to validate the stability of their production-running workloads (the streaming service we all use), which was running on Amazon Web Services (EC2 VM instances). The main purpose of Chaos Monkey was detecting how their systems would respond to critical components being taken down.
By intentionally shutting down workloads, it would become clear which weaknesses were present in the total topology, allowing the engineering teams to work toward mitigation.
Introduction to Azure Chaos Studio
Azure Chaos Studio is provided as a service, which means you don't have to deploy your own infrastructure first to get it up and running. Azure Chaos Studio Preview is a fully managed chaos engineering experimentation platform for accelerating discovery of hard-to-find problems, from late-stage development through production. Disrupt your apps and their corresponding Azure resources (Virtual Machines, Network Security Groups, App Services, Cosmos DB, Azure Kubernetes Service, Azure Key Vault and more) intentionally to identify gaps and plan mitigations before your users or customers are impacted by a problem.
Experiment by subjecting your Azure apps to real or simulated faults in a controlled manner to better understand application resiliency. Observe how your apps will respond to real-world disruptions such as network latency, an unexpected storage outage, expiring secrets, or even a full datacenter outage.
Thanks to Azure Chaos Studio, you can validate product quality where and when it makes sense for your organization. Use the continuously expanding library of faults, which includes CPU pressure, network latency, blocked resource access, and even infrastructure outages. Drive application resilience by performing ad-hoc drills, integrating with your CI/CD pipeline, or both, to monitor production quality through continuous validation.
Avoid the need to manage tools and scripts while spending more time learning about your application's resilience.
Get started quickly with experiment templates and an expanding library of faults, including agent-based faults that disrupt within resources and service-based faults that disrupt resources at the control plane.
Deploying Azure Chaos Studio
The first thing you need to check is that the "Microsoft.Chaos" Azure Resource Provider is enabled (registered) in your subscription. To do that, open your Azure portal and search for Subscriptions.
Select the subscription in which you want to enable Azure Chaos Studio. From within the detailed blade, select "Resource Providers" under the Settings pane, and search for "Chaos". Select "Microsoft.Chaos" and click "Register" in the top menu; give it a few minutes, until the Status column shows "Registered".
Following the Zero Trust concept of least privilege, Chaos Studio requires a user-assigned managed identity as its security object to interact with the Azure target resources. To create this, click Create New Resource, and search for User Managed Identity.
Complete the necessary parameters in the setup blade:
Azure Subscription
Azure Resource Group
Region
Name for the Chaos managed identity object
Chaos Studio relies on Application Insights and an underlying Log Analytics workspace to store metadata of the service (my assumption is that in a later stage, this will be used to store the actual logging of the executed experiments and target resources' behavior). From the Azure portal, select Create New Resource and search for Application Insights. Specify the necessary parameters for the deployment:
Azure Subscription
Azure Resource Group
Name for the Application Insights resource, e.g. ChaosAppInsights
Resource Mode: Workspace-based
From the Azure portal, search for Azure Chaos Studio. Select Onboard Resources. This brings you to the Targets section of the blade.
Here, you can filter for specific subscriptions or specific resource groups (or both), where next you need to select the Azure resource(s) you want to use as a target. A target can be a Virtual Machine (both Windows and Linux OS are supported), as well as Virtual Machine Scale Sets (VMSS). This approach requires the installation of the Chaos Studio agent on the VMs as part of the target setup. Non-VM Azure services such as App Services, Azure Key Vault, Network Security Groups, Cosmos DB and Azure Kubernetes Service (AKS) rely on the service-direct scenario, without the agent dependency.
Onboarding an Azure VM to Chaos Studio
In this example, I'm going to target a Windows Server virtual machine, selecting it, which unlocks the "Enable Targets" menu option. From here, select Enable agent-based targets (VM, VMSS) from the menu. This opens the Enable agent targets blade. Provide the necessary parameters for the deployment:
Subscription
Azure Managed Identity you created in the previous steps
Application Insights account you created for Chaos Studio
Wait for the deployment to complete.
Onboarding Non-VM Azure Resources to Chaos Studio
In this next example, I'm going to target several other Azure resources as Chaos targets, starting with a Network Security Group. This opens the Enable service direct targets blade. Click Review and Enable to complete the step.
If you want, you can add additional non-VM Azure resources in this scenario. For example, I included my Azure Kubernetes Service (AKS) resource, as well as Azure Key Vault.
Creating the VM Target Experiment
In this next section, you will learn how to create an Azure VM experiment simulating CPU pressure.
With the VM Chaos agent installed, it can now be used as a proper target for a chaos experiment. From the Chaos Studio blade, select Experiments, then select New Experiment.
Under the Basics tab, complete the necessary base information:
Subscription
Resource Group
Name: unique descriptive name for the experiment, e.g. CPUSpike
Region of choice
Click Next to define the Permissions. Here, a new managed identity is allocated as the security object of the experiment itself. Later on, you need to grant this experiment the necessary IAM/RBAC (Role-Based Access Control) permissions on the Azure target resource. Click Next to open the Experiment Designer.
The logical structure is based on Steps, containing Branches. Within a Branch, you specify the actual fault Action or Delay.
Steps run in parallel, while Actions are executed sequentially.
Click + Action, and select Add Fault. This opens a list of possible fault injections for all resources.
From the list of faults, select CPU Pressure. Next, specify the Duration parameter (10 minutes seems a fair test) and set pressureLevel to 95, meaning a CPU pressure of 95% during 10 minutes. Next, allocate the Azure target resource. In our example, select the VM you deployed earlier as the sample target. A new Experiment resource got created. Before the experiment can run successfully, its corresponding managed identity needs the correct RBAC/IAM permissions on the target resource. Navigate to the Azure VM from the Azure portal, and select Access Control (IAM). Different Azure target resources require different Chaos experiment RBAC permissions; for an Azure VM, the Reader permission is sufficient. Note: for an overview of which RBAC permission is required for each Azure resource, check this link in the Azure docs.
From Add Role Assignment, select Reader as the role. In the Assign Access to step, select Managed Identity and open the Chaos Experiment section under the Managed Identity selector. Select the CPUSpike experiment resource.
With the permissions allocated, navigate back to the CPUSpike experiment in the Chaos Studio blade, and click Run.
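The portal's Run button ultimately calls the Chaos Studio ARM REST API, so an experiment can also be started programmatically, which is handy for CI/CD integration. Below is a hedged Python sketch; the subscription, resource group and experiment name are placeholders, and the api-version shown is an assumption you should verify against the current Azure docs before relying on it:

```python
# Illustrative sketch of starting a Chaos Studio experiment via ARM REST.
# All identifiers below are placeholders; the api-version is an assumption.

def experiment_start_url(subscription_id: str, resource_group: str,
                         experiment_name: str,
                         api_version: str = "2023-11-01") -> str:
    """Build the ARM URL that starts a Chaos Studio experiment."""
    return ("https://management.azure.com"
            f"/subscriptions/{subscription_id}"
            f"/resourceGroups/{resource_group}"
            f"/providers/Microsoft.Chaos/experiments/{experiment_name}"
            f"/start?api-version={api_version}")

url = experiment_start_url("<sub-id>", "rg-chaos", "CPUSpike")
print(url)

# With a valid Azure AD bearer token, the actual call would be a POST, e.g.:
# import requests
# requests.post(url, headers={"Authorization": f"Bearer {token}"})
```

Wiring a call like this into a pipeline stage is one way to run the ad-hoc drills mentioned earlier as continuous validation.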
The experiment task will start, and run for 10 minutes.
Wait for the experiment to change to the Running state. With the experiment running, navigate to the Azure VM you're testing against, and open its Metrics. From the Metrics blade, select Percentage CPU under Metric in the graph, and watch the real-time CPU load. Repeat this process every couple of minutes, and see how the CPU load is gradually spiking, eventually reaching 95% for a certain amount of time. Wait for the CPUSpike experiment to complete successfully.
Summary
In this article, you learned about Chaos Engineering, how Netflix created Chaos Monkey as the foundation of Chaos Engineering, and how Azure Chaos Studio allows for chaos-testing of your Azure resources.
For now, you learned how to deploy Chaos Studio, and how to enable an Azure VM as a chaos target, followed by how to create and run a virtual machine CPUSpike experiment.
In a later follow-up article, I will show you how to use Chaos Studio experiments against an Azure Kubernetes cluster, as well as a Network Security Group.
I hope this article sparked your interest in Azure Chaos Studio. Go out and experiment with it, and let me know how it goes!
Cheers!!
/Peter
","date":"2023-08-27T00:00:00Z","permalink":"/post/intro-to-chaos-engineering-and-azure-chaos-studio-preview/","title":"Intro to Chaos Engineering and Azure Chaos Studio (Preview)"},{"content":"You must have lived under a rock if you didn't hear about how important Azure AI is for Microsoft and its partner and customer ecosystem. Thanks to Artificial Intelligence (AI), companies will be more innovative, and employees will be more productive.
While I honestly was a bit hesitant at first myself - knowing a big part of my job is providing training, and seeing what AI can do here in terms of content creation, video creation and the like, yes, the trainer role will definitely (have to) change over the coming months - once I started digging into its capabilities more, I started to see the AI engine's potential.
To me, AI capabilities are here to support us; think of it as a facilitator, a coach, someone who's walking the path with you. But you're still in control, deciding which way to go, when and where.
Over the last 18 months, I've been developing an app for our internal Microsoft trainer team, using a combination of Azure DevOps Pipelines, Blazor .NET and Azure Blob Storage for storing the guidelines and documentation, allowing them to quickly deploy Azure demo scenarios using a self-service portal. One of the missing features in the app was a decent search capability, allowing trainers to search for Azure resource keywords or demo scenarios.
With Azure AI Studio being available, promising a smooth experience for building AI-integrated solutions, I saw this as the perfect candidate for my missing search feature. What if I could provide a chat bot, allowing trainers to ask natural-language questions, where the answer would be a summary of demo steps to showcase, or pulling up the full demo guide document? Sounds amazing, right?
I gotta say, it's amazing.
Especially because it took me less than 30 minutes (including making mistakes and missing steps, so technically you can do this in less than 15 minutes now :) ).
What you will build
In this article, you will learn how to use Azure AI and Azure AI Studio to deploy a chat bot which connects to Azure Blob Storage and uses your own markdown files as input for providing answers to questions.
What you need
In order to follow the steps below and succeed in getting your first Azure AI chat bot up and running, you need to meet these prerequisites:
An Azure Subscription (you can use the Azure Free Subscription link if you don't have one yet)
Access to Azure OpenAI in your Azure Subscription. Complete the request form here. Note: access to Azure OpenAI requires company details; it doesn't work for private/personal accounts.
Cognitive Services Contributor or higher permissions in your Azure Subscription.
An Azure Blob Storage account with at least 1 container. The sample files will be uploaded as part of the data source selection steps later on.
With the prerequisites validated, you are ready to move on with the base setup of the Azure AI Chat Playground using the following steps.
Deploying Azure AI Chat Playground using Azure AI Studio
Open Azure AI Studio from your browser, using your Azure admin account credentials. Select your Azure Subscription, and click Create Resource.
This opens the Create Azure OpenAI Resource blade. Here, complete the necessary parameters:
Azure Subscription
Existing or new Azure Resource Group
Azure Region of choice
Unique name for the Azure OpenAI resource
Pricing Tier: select Standard S0 (the free tier won't allow us to use the necessary Cognitive Search later on)
Wait for the Azure OpenAI resource to get deployed, and navigate to the resource once it's ready. Navigate back to Azure OpenAI Studio, which will allow you to select the Azure AI resource created earlier.
From Azure OpenAI Studio, select Chat Playground from the Get Started options. Within the Chat Playground, click the Create new deployment button to set up a new Chat Playground. In the next step, you will create the AI model needed for the Cognitive Search later on. Complete the necessary settings:
Model: gpt-35-turbo
Model Version: Auto-update-to-default
Deployment Name: descriptive name of what the model is about
This completes the first part of the steps, where you deployed the Azure AI Chat Playground using Azure AI Studio. In the next step, you add the data source which will be used for the chat content.
Adding your own data sources (Blob Storage) to the Azure AI Chat Playground
With the model created, we can move on to the next step: adding data. From the Chat Playground, select Add your data (preview). From the Add Data blade, complete the necessary settings and parameters:
Azure Subscription
Azure Blob Storage Account
Azure Blob Storage Account container
Note how it asks for an Azure Cognitive Search resource as well; this is needed to be able to read the actual content of the blobs, such as Word documents, PDF files, etc.
Click the Create a new Azure Cognitive Search Resource link to get this resource created.
From the Create a search service blade, enter the necessary parameters for the resource creation:
Azure Subscription
Azure Resource Group
Location (make sure this matches the previous settings for the Azure OpenAI resource)
Service Name: unique name for the search service
Pricing Tier: Basic (since the free tier won't be recognized by the Chat Playground)
Scale / Replica: 1/1
Once the search service is created, navigate back to the Chat Playground, and repeat the steps to add your own data. This time, the Cognitive Search resource will be recognized as a service in the Add Data Source step. Click Next, and upload a few sample files in the Upload Files step. Continue the add data source wizard steps, completing them by clicking Save and Close.
From the Chat Playground window, notice how your data is getting added. This process should only take a few minutes. This completes this part of the steps, where you specified the data source to be used by the Chat Playground. In the next and last step, you will use the chat session to validate the functionality of the chat bot and test the accuracy of the responses.
Using Chat Session to validate data content and responses
From the Chat Playground blade, navigate to Chat session. Enter a basic question in the your message field. Note: in my scenario, I was using demo guides, so I wanted to check if the chat bot could find a demo scenario containing Cosmos DB as a resource.
Based on the question, the chat bot responded with an accurate answer, providing a brief description of the actual demo steps from the guide, as well as a link to the actual source markdown file in Blob Storage.
As I only have 1 single guide with Cosmos DB, let's test how it handles a question when multiple results are possible. I asked a somewhat broader question, using retail application as the keyword (Note: our demo scenarios involve a retail application as an example, which exists in a Virtual Machine architecture, a Platform as a Service, Container Instance, Kubernetes Service and Azure Container Apps architecture). So based on that, the expectation is to get multiple results back. Woohoo!! The chat bot found the different sources and provided a nice summary overview.
This was more than convincing to me of how powerful Azure AI is. While the answers might not have been 100% accurate - close to 95% I guess :) - keep in mind we just deployed the model without any fine-tuning, and didn't wait a long time to actually build up an accurate index of the blob source content.
Next, optionally, you can publish the chat bot to an Azure App Service. This would allow a developer to integrate the bot in a broader web app scenario using an iframe or similar HTML/CSS code.
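Besides the iframe approach, a developer could also call the deployed Azure OpenAI model directly from code. Below is a hedged sketch assuming the `openai` Python package (v1+); the endpoint, key, deployment name and api_version are placeholders, and the on-your-data grounding configured in the portal is omitted here for brevity:

```python
# Hedged sketch: querying an Azure OpenAI chat deployment from code.
# Endpoint, key, deployment name and api_version are placeholder assumptions.

def build_chat_payload(question: str) -> list[dict]:
    """Compose the messages list for a chat completion request."""
    return [
        {"role": "system",
         "content": "You answer questions using the trainer demo guides."},
        {"role": "user", "content": question},
    ]

messages = build_chat_payload("Which demo scenario uses Cosmos DB?")
print(messages[1]["content"])

# The actual call (requires the 'openai' package and valid credentials):
# from openai import AzureOpenAI
# client = AzureOpenAI(azure_endpoint="https://<resource>.openai.azure.com",
#                      api_key="<key>", api_version="2024-02-01")
# response = client.chat.completions.create(
#     model="<deployment-name>", messages=messages)
# print(response.choices[0].message.content)
```

Keeping the system prompt in code like this makes it easy to tune the bot's behavior per consuming app, independent of the portal configuration.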
From the Chat Playground blade, navigate to Deploy to in the upper right corner, and select Azure App Service.\nComplete the necessary parameters for the Azure App Service to be created: App Service Name Azure Subscription Azure Resource Group Location Pricing Plan (S0 or S1 would be OK) After waiting about 5 minutes (although the wizard said it could take up to 10min\u0026hellip;), it asked me for my Azure AD credentials to authenticate. (Note: as part of the publishing wizard, a new App Registration and Service Principal gets created, granting only the admin user access. In a real-life scenario, you would need to update the Authentication settings on the App Registration to allow for a broader Identity scope)\nAfter successful authentication, the Chat Bot is ready to be used:\nAs a final test, I wanted to see how the Chat Bot responded if it didn\u0026rsquo;t find the correct answer. (This was partly to rule out the ChatGPT experience I had before, where the AI engine invents its own answers, but still explains them in such a way that they feel like the correct answer) As you can see, Azure AI handles it a bit more \u0026lsquo;honestly\u0026rsquo;, admitting it couldn\u0026rsquo;t provide an accurate answer.\nSummary Azure AI is an amazing cloud service, with unseen capabilities. With this article, I wanted to inform you as a reader on how easy it can be to set up Azure AI Cognitive Search, and use it with Chat Bot functionality, based on your own data from Azure Blob Storage.\nI\u0026rsquo;m confident Azure AI will become a big part of our day-to-day skillset.
So expect more similar blog posts in the near future, as I continue my journey of learning about AI and getting ready for the future.\nCheers!!\n/Peter\n","date":"2023-07-17T00:00:00Z","permalink":"/post/build-an-azure-ai-chatbot-using-your-own-data-in-blob-storage/","title":"Build an Azure AI chatbot using your own data from blob storage "},{"content":"Earlier this week, I ran into an interesting phenomenon when publishing a code update to an existing Azure App Service. As this was a small project, I\u0026rsquo;ve always been deploying updates manually via right-click / publish in Visual Studio. But then I said to myself, Peter, as a passionate DevOps engineer and trainer, just go out and create a pipeline for this.\nAnd that\u0026rsquo;s what happened.\nRunning the release pipeline came back successful, but the site threw an error:\nI was like well, OK, no worries, let\u0026rsquo;s go back to the manual deployment from Visual Studio for now. Only to find out that one no longer succeeded either, just giving me a spinning publishing task.\nSo this was not really helping me further. Time to open up the App Service Logs settings on the App Service to dig in.\nAfter which I could check the live logs using App Service Log Stream (notice the verbose option to get immediate and full feedback\u0026hellip;)\n2 things here got my attention:\nHTTP Error 403.13 Forbidden A default document is not configured for the requested URL So it seems like the index.html I use in my app couldn\u0026rsquo;t be found on the Web Server. Let\u0026rsquo;s validate with App Service Editor\nInteresting\u0026hellip; so I have my drop folder with the application zip package and some other deployment artifacts, but not the actual application files in an extracted format.
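As a side note, the App Service Logs and Log Stream steps used above can also be driven from the Azure CLI; a quick sketch, assuming hypothetical app and resource group names:

```shell
# Enable filesystem application and web server logging at verbose level
# (the CLI equivalent of the App Service Logs settings blade)
az webapp log config \
  --name my-webapp \
  --resource-group my-rg \
  --application-logging filesystem \
  --web-server-logging filesystem \
  --level verbose

# Stream the live logs (the CLI equivalent of App Service Log Stream)
az webapp log tail --name my-webapp --resource-group my-rg
```

Handy when you want the log stream in a terminal next to your deployment output instead of in the portal.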
The drop folder was also something that made me curious\u0026hellip; As that is what Azure DevOps is using to publish the package\u0026hellip; Let\u0026rsquo;s go back to my Azure DevOps Release pipeline and check something\u0026hellip;\nEUREKA!!! Looks like Peter made a mistake here, by not setting the package file to use. So what the ADO Pipeline does here is just copy the /drop folder with the artifact into the Azure App Service, without extracting it.\nThis is what this setting should look like:\nWith these new changes, let\u0026rsquo;s run the release deployment again and see what happens\u0026hellip;\nand the website is running as expected!\nLast check, validating if a new deployment from Visual Studio is running as expected again\u0026hellip; and that runs successfully again too!\nSummary In this post, I wanted to share some troubleshooting steps for Azure App Services. And also admit I made a minor mistake in my ADO pipeline setup. So additional lesson learned: always double-check your settings when something no longer works as it should :)\nCheers!!\n/Peter\n","date":"2023-04-30T00:00:00Z","permalink":"/post/you-do-not-have-permissions-error-after-publishing-to-azure-app-services/","title":"You do not have permissions to view this directory or page after publishing to Azure App Service"},{"content":"Hey folks,\nI\u0026rsquo;m a fond user of Azure DevOps for testing application builds, running CI/CD pipelines to publish Azure demo scenarios and training our Microsoft global customers on it every few weeks through AZ-400 training deliveries.\nOne of the lesser-known, yet AWESOMELY POWERFUL features besides \u0026lsquo;running pipelines\u0026rsquo; is Azure Boards, providing an end-to-end project methodology platform using a Scrum, Agile, CMMI or custom approach.\nIn this post, I mainly wanted to zoom in on the custom capabilities, capturing feedback from employees - which got entered through an Office Forms form, picked up by
Azure Logic Apps, and stored in a customized Azure Boards Work Item.\nIn short, the following steps are needed:\nCreate custom Office Forms with questions and fields to complete Create custom Azure DevOps Process Methodology containing the custom Work Item fields and form layout Create new Azure DevOps Project, linked to the custom Project Methodology Create Azure Logic Apps flow, mapping each custom field from Office Forms to the custom Work Item fields See it in action :) Here we go\u0026hellip;\nCreate custom Office Forms with questions and fields to complete The first step involves creating a custom Office Form, which is probably one of the easiest parts in the process. This is a free service within Microsoft Office Online, typically used for collecting user input, such as surveys, quizzes and polls, and all you need is a Microsoft Account such as Outlook.com, Hotmail.com or an organizational Office 365 account.\nBrowse to https://forms.office.com, and select New Form Next, specify the different questions, together with the answer type (e.g. multiple choice, text field,\u0026hellip;) I won\u0026rsquo;t cover the details on how to do this, as I think it\u0026rsquo;s self-explanatory. A sample form I\u0026rsquo;ll be using in this post looks like this: With the Office Form ready, we can move on to the next step, creating a new custom ADO Process Methodology.\nCreate custom Azure DevOps Process Methodology Azure DevOps provides several Process Methodologies, such as Scrum, Agile, Basic, but also allows you to create customized versions of those. 
(More info on each process and how to choose is documented on Microsoft Learn).\nLog on to Azure DevOps with an Organizational admin account Select Azure DevOps (the logo in the upper left corner), and select Organization Settings Within the Settings menu, select Process under the Boards section This shows a list of default Azure Boards processes (Basic, Agile, Scrum, CMMI) In this example, we will build a new custom deviation from the Scrum process, but you can choose any you want. Hover the mouse over the Scrum process, and select the ellipsis (the 3 dots). From the context menu, select Create inherited process Provide a name and (optional) description for the process. I called mine Forms Test Process Once created, select the new process. This opens a list of Work Item types such as Bug, Epic, Task and others. As the new Work Item we create is so custom, it doesn\u0026rsquo;t really matter which one to choose. If you have about 50% or more that\u0026rsquo;s identical to an existing Work Item, you can use that as a baseline. Click New Work Item Type Provide a Name, Description, Icon, and Icon Color of choice. Confirm by pressing the Create button. Once created, select the new work item type. This opens the Layout editor, where we will add custom fields, reflecting the different questions/items from the Office Form earlier. A Work Item is based on Tabs. In my example, I only use a single tab, called Details. Know you can add as many Tabs as needed. Within each Tab, the Work Item layout is built up of 3 panes: a left pane, holding a Description field; the middle pane, holding Custom fields; and the right pane, which has Deployment, Development and Related Work as default items - at least in my setup. Add the custom fields you want to have on the Work Item, specifying the field type (e.g. I added an open question, set as Text Multiple Lines, as well as adding Geography and Category as Text Single Line items).
These fields somewhat correspond with the different items on the Office Form. Although a lot more customizations are possible, I hope these basic steps help you build the baseline for what I want to guide you through in this post.\nWe now have the Azure DevOps Process created, as well as the customized version of the Work Item we want to use. Let\u0026rsquo;s hook this up to a new Azure DevOps Project.\nCreate new Azure DevOps Project, linked to the custom Process Click the Azure DevOps logo (upper left corner), and press the + New Project button. Provide a Project Name, (optional) Description, and Visibility. Next, click Advanced to specify the Work Item Process. Click the Work Item Process field and select the custom process created earlier. Once the project is created, navigate to Boards. Here, click New Work Item, and notice the new Work Item type is available. Select the new Work Item, which shows the detailed view. Notice the custom fields we added earlier are nicely showing up here. The flexibility we now have is that the Work Item can be created from both the Office Form, as well as still being available from within Azure DevOps. (Note: while we added custom fields, I didn\u0026rsquo;t add any field content to choose from, such as EMEA, APAC, USA in the Geography field - which would be a viable option). In my use case, the only way to create a new Work Item is through the Office Form, as no typical users got access to Azure DevOps to do that (Permissions :)).\nAwesome, we are now about 3/4 through the process, with the remaining part being the creation of the flow, using Azure Logic Apps.\nCreate an Azure Logic App flow to capture Office Forms data to an ADO Work Item Log on to Azure with administrative permissions to create an Azure Logic App resource. When creating the Logic App, specify a unique name, Resource Group, Location and Plan (consumption would be OK).
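If you prefer scripting the Logic App resource creation over using the portal, the Azure CLI logic extension can do that too. A sketch with hypothetical names; the workflow definition JSON itself is still easiest to build (and export) from the designer:

```shell
# The 'logic' commands live in a CLI extension
az extension add --name logic

# Create a Consumption Logic App workflow from an exported definition file
az logic workflow create \
  --resource-group my-rg \
  --name forms-to-ado-flow \
  --location westeurope \
  --definition workflow-definition.json
```

In practice many people create the empty resource and then build the trigger/actions in the designer, as described in the next steps.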
Once the resource is created, it automatically opens Logic App Designer, which allows for the setup of the actual flow. From the list of sample scenarios, select Blank Logic App. In the Search connectors and triggers field, search for Microsoft Forms. Next, select When a new response is submitted as the trigger. In the Form Id field, select the Office Form name you used earlier. Click + New Step to add the next step in the Logic App flow. In the Search connectors and triggers field, search for Azure DevOps. From the list of Actions, select Create a new work item. Complete the fields, selecting your DevOps Organization, the DevOps Project and the Work Item Type as created earlier. Now, we map the custom fields, by selecting Add new parameter, and selecting Other Fields. Within the little table of Other Fields (the key/value), select the Key object; this opens the Logic Apps Dynamic Content. Here, click the See more option This is where Logic Apps is awesome. It allows you to select (all) previous fields from all previous steps in the flow process. Notice how the Forms information is returned as Body, which is not what we need. We want to reference each individual answer to each of the Form\u0026rsquo;s questions, instead of the full body. To make this possible, we have to add another step in-between the Forms step and the Azure DevOps step. Click on the + sign in-between both steps, and select Add New Action. Search for Microsoft Forms again. This time, it will show an action called Get Response Details. In the Forms Id field, select the name of the Office Form; in the Response Id field, click See More and select List of Response notifications Response id from the Dynamic Content list of options. With that step added, return to the Create a work item step in the Logic App Flow, and navigate to the Other Fields section in the parameters. Select the Enter Key field, which opens the Dynamic Content blade again.
This time, notice how the different response details (the Form\u0026rsquo;s questions) are visible. From here, the idea is that you \u0026lsquo;map\u0026rsquo; each custom field object from the ADO Work Item with a corresponding value from the Office Form. For example, the work item field \u0026ldquo;geography\u0026rdquo; created earlier maps to the what is your geography question I have on the form. Once done with all field mappings, Save the Logic App. This completes the configuration of the Azure Logic App (Note: there is more work needed if you have more fields\u0026hellip;)\nWhich brings us to the last step\u0026hellip; seeing it in action\u0026hellip;\nTesting the Azure DevOps Work Item creation Return to the Office Form, and click Collect Responses. Complete the different questions and fields on the Form. Wait for about a minute, and return to the Azure Logic Apps flow created earlier. From the Overview blade, navigate to Runs History. The Forms completion resulted in a successful workflow trigger. Select the line, which opens a more detailed view. Last, return to the Azure DevOps Project, navigate to Boards and open Work Items. Notice the newly created Work Item. Open the item to see how the custom fields got completed. That\u0026rsquo;s pretty much it!!
Nice isn\u0026rsquo;t it\u0026hellip;\nSummary In this post, I wanted to share more details on how you can allow end-users (or customers) to create Azure DevOps Work Items (of pretty much any type, with any custom fields), using an integration of Microsoft Forms and Azure Logic Apps.\nDon\u0026rsquo;t hesitate to reach out if you have any additional questions on this, or if you want to share how you used this in your own scenarios.\nCheers!!\n/Peter\n","date":"2023-03-26T00:00:00Z","permalink":"/post/collecting-feedback-in-ado-work-items-from-office-forms/","title":"Collecting Feedback in ADO work items from Office Forms"},{"content":"\nHey friends,\nWelcome to #AzureSpringClean, an initiative from Joe Carlyle and Thomas Thornton in which I\u0026rsquo;m honored to participate again, for the 4th year already. Thanks guys for trusting me once more to share some Azure knowledge\u0026hellip;\nThis time, I wanted to guide you through the buzzword of the last couple of years, containers\u0026hellip; and more specifically, the different options you have in Azure to run your containerized workloads.\nContainerization has become a popular way to deploy applications in the cloud, offering benefits such as scalability, portability, and reliability. Azure, Microsoft\u0026rsquo;s cloud platform, offers several services that allow running containerized workloads, each with its own strengths and limitations. In this article, we will explore the different Azure services for container orchestration and management, including Azure Container Instance, Azure Kubernetes Services, Azure App Services, and Azure Container Apps.\nThe starting point of a containerized workload is having, or building, a Docker container image.\nDocker \u0026amp; Docker Desktop Docker is a popular platform for building, shipping, and running containerized applications.
It provides a consistent environment for developers and operators to develop and deploy applications across different platforms and infrastructures. Docker makes it easy to package applications and their dependencies into portable container images, which can be run on any machine that supports Docker.\nDocker containers are lightweight, standalone, and executable packages of software that include everything needed to run an application. They contain the application code, runtime, system tools, libraries, and settings, making them highly portable and efficient. Docker containers run in isolation from the host operating system, providing consistent behavior and preventing conflicts with other applications.\nDocker Desktop is a desktop application for Windows and macOS that provides a complete development environment for building and testing Docker applications. It includes the Docker Engine, Docker CLI, and a GUI-based interface for managing containers, images, and networks. With Docker Desktop, developers can easily build, test, and run Docker applications on their local machines, without having to set up a separate environment.\nDocker Desktop provides a simple and intuitive user interface for managing Docker images and containers. It allows developers to create, edit, and run Docker containers with just a few clicks. Developers can also use Docker Desktop to deploy applications to remote Docker hosts, such as cloud-based container orchestration platforms like Kubernetes.\nOne of the major benefits of Docker Desktop is its ability to provide a consistent development environment across different platforms and operating systems. It eliminates the need for developers to set up and maintain complex development environments on their own machines, which can be time-consuming and error-prone. 
Docker Desktop also supports popular programming languages and frameworks, such as Java, Node.js, Python, and Ruby, making it a versatile tool for building modern applications.\nOverall, Docker and Docker Desktop provide a powerful platform for building, shipping, and running containerized applications. They simplify the development and deployment of applications, provide a consistent environment across different platforms, and offer a flexible and scalable way to build modern applications. With the continued growth of containerization, Docker and Docker Desktop are essential tools for any developer or operator looking to stay ahead in the rapidly evolving world of software development.\nFor more information on Docker and Docker Desktop, head over to the Docker Downloads.\nIf you want a sample container to test your Docker / Docker Desktop setup, feel free to use my sample e-commerce workload container, which runs a sample .NET6 web app\nNow that you have your Docker environment ready to use, let\u0026rsquo;s take the next step, moving the container image into Azure.\nAzure Container Registry (ACR) (https://learn.microsoft.com/en-us/azure/container-registry/) ACR is a managed private registry for storing and managing container images in the cloud. With ACR, you can store and manage Docker images for all of your containerized applications, making it easy to deploy and manage them in the cloud.\nThe purpose of ACR is to provide a secure and reliable way to store, manage, and deploy container images. By using ACR, you can ensure that your container images are stored securely in the cloud, and that only authorized users have access to them.
ACR also provides built-in integration with all other Azure Container Services (see below), making it easy to deploy your container images across the different services.\nACR supports Docker Hub as well as DevOps environments as sources for container images, and it provides a seamless experience for pushing and pulling images from the registry. ACR also supports advanced features, such as geo-replication and image security vulnerability scanning, which allow you to replicate your images to multiple regions for high availability and scan your images for security vulnerabilities (backed by Defender for Containers and Defender for Cloud).\nIn summary, ACR serves as a central repository for storing and managing your container images, making it easy to deploy and manage your containerized applications in the cloud. It provides a secure and reliable way to store your images, with built-in integration with other Azure container services for easy deployment.\nSome sample Azure CLI code to get you started:\naz acr create --resource-group myresourcegroup --name myacr --sku Basic\nOnce you run this command, Azure will create a new ACR with the specified name and SKU in the specified resource group. You can then use the ACR to store and manage your container images.\nWith the ACR being ready, it\u0026rsquo;s time to upload (push) the Docker image into the registry. Here are a few steps to get you started:\naz login\naz acr login --name myacr\nBefore you can push a Docker image to the Azure Container Registry, it needs to be tagged with the name of the registry. You can use the docker tag command to help with this:\ndocker tag pdetender/eshopwebmvc myacr.azurecr.io/eshopwebmvc\nfollowed by:\ndocker push myacr.azurecr.io/eshopwebmvc\nand wait for the upload to complete.\nAzure Container Instance (ACI) (https://learn.microsoft.com/en-us/azure/container-instances/) Azure Container Instance (ACI) is a serverless solution for running containers in the cloud.
With ACI, you can deploy and manage containers without worrying about the underlying infrastructure. ACI is an excellent choice for running short-lived containerized tasks that don\u0026rsquo;t require orchestration, such as batch processing, job scheduling, or testing.\nACI is easy to use, as it doesn\u0026rsquo;t require any knowledge of container orchestration tools such as Kubernetes or Docker Swarm. Instead, you can use the Azure portal, Azure CLI, or Azure PowerShell to deploy and manage your containers.\nOne of the strengths of ACI is its cost-effectiveness. With ACI, you only pay for the exact amount of compute and memory resources that your containerized tasks require, measured in seconds. This makes ACI an ideal solution for running sporadic, bursty workloads.\nHowever, ACI has some limitations. First, ACI only supports running single containers or multi-container groups, not entire applications. Second, ACI doesn\u0026rsquo;t provide advanced features such as automatic scaling, self-healing, or load balancing. Finally, ACI has limited networking capabilities, as it doesn\u0026rsquo;t support virtual networks or custom IP addresses.\nWith the EshopOnWeb Docker image uploaded to the Azure Container Registry, use the following Az CLI command to deploy an Azure Container Instance (note: pulling from a private ACR also requires registry credentials, for example via --registry-username and --registry-password):\naz container create --resource-group myResourceGroup --name aci-springclean-app --image myacr.azurecr.io/eshopwebmvc --cpu 1 --memory 1 --registry-login-server myacr.azurecr.io --ip-address Public --dns-name-label aci-springclean-app --ports 80\nAzure Kubernetes Services (AKS) (https://learn.microsoft.com/en-us/azure/aks/intro-kubernetes) Azure Kubernetes Services (AKS) is a managed Kubernetes service that allows you to deploy and manage containerized applications in the cloud.
Kubernetes is a powerful open-source container orchestration tool that automates the deployment, scaling, and management of containerized workloads.\nAKS is an excellent choice for running complex, production-grade applications that require orchestration, such as microservices architectures or stateful applications. With AKS, you can take advantage of Kubernetes\u0026rsquo; advanced features, such as automatic scaling, self-healing, and rolling updates.\nAKS is easy to use, as it abstracts away the complexity of Kubernetes and provides an easy-to-use management interface. With AKS, you can deploy your Kubernetes clusters in minutes, using the Azure portal, Azure CLI, or Azure PowerShell.\nOne of the strengths of AKS is its scalability. With AKS, you can scale your clusters up or down based on demand, without worrying about the underlying infrastructure. AKS also provides advanced networking capabilities, such as virtual networks, load balancers, and custom IP addresses.\nHowever, AKS has some limitations. First, AKS is more expensive than ACI, as it requires more resources and management overhead. Second, AKS requires some knowledge of Kubernetes, which can be challenging for beginners. 
Finally, AKS may have some limitations in terms of customization, as it is a managed service that abstracts away some of the lower-level details of Kubernetes.\nHere are a few steps to get you started in deploying an AKS cluster:\naz aks create \\\n  --resource-group myResourceGroup \\\n  --name myAKSCluster \\\n  --node-count 2 \\\n  --generate-ssh-keys \\\n  --attach-acr \u0026lt;acrName\u0026gt;\nDeployment should take about 10-15 minutes, depending on the Azure region.\nOnce the AKS service is up-and-running, you can manage it using kubectl, the Kubernetes command line interface.\naz aks install-cli\nNext, connect to the cluster from kubectl:\naz aks get-credentials --resource-group myResourceGroup --name myAKSCluster\nFrom here, you can validate the Kubernetes cluster nodes:\nkubectl get nodes\nwhich returns output similar to:\n$ kubectl get nodes\nNAME                                STATUS   ROLES   AGE     VERSION\naks-nodepool1-37463671-vmss000000   Ready    agent   2m37s   v1.18.10\naks-nodepool1-37463671-vmss000001   Ready    agent   2m28s   v1.18.10\nIn order to get a containerized application running as a pod (= Kubernetes\u0026rsquo; terminology for a running container\u0026hellip;), you need to create a Kubernetes manifest file, which uses a YAML syntax format.\nThere are a lot of options and configuration parameters possible, but the below example should get you started: replace the image name pdtacr\u0026hellip; with the name of your ACR image\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n  name: secsample\nspec:\n  replicas: 3\n  selector:\n    matchLabels:\n      app: secsample\n  template:\n    metadata:\n      labels:\n        app: secsample\n    spec:\n      containers:\n      - name: secsample\n        image: pdtacr.azurecr.io/simplcdotnet31:latest\n        ports:\n        - containerPort: 80\n      imagePullSecrets:\n      - name: acr-auth\n---\napiVersion: v1\nkind: Service\nmetadata:\n  name: secsample\nspec:\n  type: LoadBalancer\n  ports:\n  - port: 80\n  selector:\n    app: secsample\nThis deploys 3 pods (replicas parameter) of the same container, across the 2
nodes in the cluster. The service will get published behind the default Azure Load Balancer, running on port 80. You can verify this later on using the public IP address of the service.\nSave the above file to your local machine, for example as springcleanaks.yaml\nUse kubectl to apply the YAML configuration to the AKS cluster:\nkubectl apply -f springcleanaks.yaml\nAfter a few minutes, validate the running service by running the following kubectl command:\nkubectl get service secsample --watch\nIf the IP-address mentions \u0026ldquo;pending\u0026rdquo;, give it a bit more time to load. Run the above command once more.\nThe outcome should now show both the internal cluster IP, as well as the public IP. Open the browser to connect to the web application.\nsecsample LoadBalancer 10.0.33.125 24.17.23.13 80:30676/TCP 67s\nIf you want to play with the AKS Autoscaling features, I can recommend the following Microsoft Learn tutorial\nAwesome, you are making good progress\u0026hellip;\nOne could think you won\u0026rsquo;t need anything more than AKS, as it provides better high-availability, scalability and several other features compared to the more standard Azure Container Instance. But that\u0026rsquo;s not always true. While AKS is probably one of the more popular (if not the most popular\u0026hellip;) ways to run containerized workloads in Azure, it is sometimes complex, overwhelming, and just \u0026ldquo;too much\u0026rdquo; for what you need.\nLet\u0026rsquo;s have a look at 2 more services\u0026hellip;\nAzure App Services (https://learn.microsoft.com/en-us/training/modules/deploy-run-container-app-service/) Azure App Service is a platform-as-a-service (PaaS) offering that allows developers to build, deploy, and scale web applications and APIs quickly and easily. With App Service, developers can deploy web apps and APIs written in various programming languages, including .NET, Java, Node.js, Python, and PHP, among others.
App Service provides built-in DevOps capabilities and integration with other Azure services, such as Azure SQL Database, Azure Redis Cache, and Azure Storage.\nOne of the features of Azure App Service is the ability to run Docker containers. Developers can package their application and its dependencies into a Docker image and deploy it to Azure App Service. Azure App Service can then run the Docker image as a container, providing all the benefits of containerization, such as portability, scalability, and isolation.\nSome of the benefits of running Docker containers in Azure App Service include:\nEasy deployment: With App Service, developers can deploy their Docker containers quickly and easily using various deployment options, such as Git, GitHub, Azure DevOps, or the Azure Portal.\nHigh availability: App Service provides built-in high availability, scaling, and load balancing capabilities, ensuring that containers are always available and responsive to incoming traffic.\nPlatform integration: App Service integrates with other Azure services, such as Azure SQL Database, Azure Redis Cache, and Azure Storage, making it easy to build end-to-end solutions with minimal effort.\nSecurity: App Service provides a secure and isolated environment for running Docker containers, with features such as network isolation, private networking, and Azure Active Directory authentication.\nHowever, Azure App Service is not the same as the previously discussed Azure Kubernetes Service (AKS), which is a container orchestration platform. AKS is designed for running and managing containerized applications at scale, with features such as automatic scaling, rolling updates, and self-healing. 
AKS is typically used for more complex applications that require multiple containers and need to be deployed across multiple nodes.\nIn summary, Azure App Service provides an easy and convenient way to run Docker containers in a PaaS environment, with built-in high availability, scalability, and integration with other Azure services. AKS, on the other hand, is a container orchestration platform designed for running and managing containerized applications at scale, with features such as automatic scaling, rolling updates, and self-healing.\nThe following Azure CLI code is what you need to get started with running a Docker image as an App Service:\n# Create a resource group\naz group create --name myResourceGroup --location eastus\n# Create an App Service plan\naz appservice plan create --name myAppServicePlan --resource-group myResourceGroup --sku B1 --is-linux\n# Create an App Service\naz webapp create --name myAppService --plan myAppServicePlan --resource-group myResourceGroup --deployment-container-image-name \u0026lt;acr-name\u0026gt;.azurecr.io/\u0026lt;container-image-name\u0026gt;:\u0026lt;tag\u0026gt; --docker-registry-server-url https://\u0026lt;acr-name\u0026gt;.azurecr.io --docker-registry-server-user \u0026lt;acr-name\u0026gt; --docker-registry-server-password \u0026lt;acr-access-token\u0026gt;\n# Configure the App Service\naz webapp config appsettings set --name myAppService --resource-group myResourceGroup --settings DOCKER_CUSTOM_IMAGE_NAME=\u0026lt;acr-name\u0026gt;.azurecr.io/\u0026lt;container-image-name\u0026gt;:\u0026lt;tag\u0026gt; WEBSITES_PORT=80\n# Set up continuous deployment\naz webapp deployment container config --name myAppService --resource-group myResourceGroup --enable-cd true\nAzure Container Apps
(https://learn.microsoft.com/en-us/azure/container-apps/overview) Azure Container Apps is a serverless platform for deploying and managing containerized applications. It is designed to simplify the deployment and management of microservices-based applications by providing a seamless experience for developers and operators.\nWith Azure Container Apps, you can deploy and manage multiple containers as part of a single application, without worrying about the underlying infrastructure. This makes it an excellent choice for running complex, multi-container applications that require orchestration.\nOne of the strengths of Azure Container Apps is its flexibility. With Azure Container Apps, you can use any container image from any registry, including Docker Hub, Azure Container Registry, or your own private registry. You can also define your application\u0026rsquo;s infrastructure as code using YAML or JSON files, which allows you to version control and automate the deployment process.\nAzure Container Apps also provides advanced features, such as automatic scaling, self-healing, and application-level load balancing. With Azure Container Apps, you can scale your application automatically based on demand, and Azure will handle the underlying infrastructure for you.\nHowever, Azure Container Apps has some limitations. First, Azure Container Apps only recently became generally available, so its feature set is still maturing and may not yet cover every production-grade scenario. Second, Azure Container Apps has some limitations in terms of customization, as it abstracts away some of the lower-level details of container orchestration.
Finally, Azure Container Apps has a pricing model that may be more expensive than other Azure container services, as it charges based on resource consumption and the number of requests processed by your application.\nUse the following code example to get started with deploying an Azure Container Apps scenario:\naz containerapp create -n MyContainerapp -g MyResourceGroup --image myregistry.azurecr.io/myimage:latest --environment MyContainerappEnv --cpu 0.5 --memory 1.0Gi --min-replicas 4 --max-replicas 8 Summary As you can see, Azure offers several services that allow running containerized workloads, each with its own strengths and limitations. In this article, I walked you through different Azure services for container orchestration and management, including Azure Container Instances, Azure Kubernetes Service, Azure App Service, and Azure Container Apps. I gave you a few Azure CLI commands to get started and deploy baseline examples. This should allow you to form your own opinion about which container service to use for your specific business-critical or testing workloads.\nI hope you learned something from reading the article, enjoy the rest of the Azure Spring Clean topics!!\nCheers!!\n/Peter\n","date":"2023-03-15T00:00:00Z","permalink":"/post/dck-acr-aci-aks-aca-the-azure-container-alphabet-soup/","title":"Azure Spring Clean - DCK, ACR, ACI, AKS, ACA, the Azure Container Alphabet Soup"},{"content":"Building a Marvel Hero catalog app using Blazor Web Assembly\nIntroduction This article describes all the steps on how to develop a Marvel Hero catalog app, using Blazor Web Assembly, and is a companion guide to the Festive Tech Calendar 2022 session I presented. 
This app introduces Blazor .NET development, and more specifically how to easily create a Single Page App using HTML, CSS and API calls to an external API Service at https://developer.marvel.com\nAs I am learning .NET development for the first time - at age 47 - and succeeded in getting an actual app up-and-running, I wanted to share my experience, hoping to inspire other readers (and viewers of the session) to learn coding as well. And maybe become as passionate a Marvel Comics fan as myself.\nI hope you enjoy the steps; feel free to contribute to this project at petender/FestiveBlazor2022live (github.com) if you want to co-learn more Blazor stuff together with me.\nPrerequisites If you want to follow along and build this sample app from scratch, you need a few tools to get started:\nVisual Studio 2022 to develop the application (VSCode or other dev tools will work as well) Community Edition can be downloaded for free here (Visual Studio 2022 Community Edition - Download Latest Free Version (microsoft.com)) GitHub Account to store the application code in source control Sign Up for free here (https://github.com/join) Azure Subscription to run the Azure Static Web Apps web application Get a Free Azure Subscription here (https://azure.microsoft.com/en-us/free/) Marvel Developer Account to get access to the API back-end Register for free at https://developer.marvel.com Deploying your first Blazor Web Assembly app from a template Visual Studio provides Blazor Web Assembly templates, both as an \u0026ldquo;empty template\u0026rdquo;, as well as one with a functional \u0026ldquo;sample weather app\u0026rdquo;. 
Although I won\u0026rsquo;t use a lot from the template, I like to start with the weather app sample application, as it comes with all necessary building blocks to get started.\nLaunch Visual Studio 2022, and select Create New Project From the list of templates, select Blazor Web Assembly App Click Next to continue the project creation wizard Select .NET 7 (Standard Term Support) as Framework version Keep all other default settings as is Click Create to complete the project creation wizard and wait for the template to get deployed in the Visual Studio development environment. The Solution Explorer looks like below: Run the app by pressing Ctrl-F5 or select Run from the upper menu (the green arrow) and wait for the compile and build phase to complete. The web app should load successfully in a new browser window. Wander around the different parts of the web app to get a bit familiar with the features. The Home button brings up the index.razor page, and can be seen as the Homepage of the app. The + Counter Page demonstrates how you can build out interaction using buttons and running a count function. The Fetch data section shows a basic outcome of an API-call to a JSON-data-structure, to publish data in a gridview. Close the browser, which brings you back into the Visual Studio development environment. This confirms the Blazor Web Assembly app is running as expected. In the next section, you learn how to update the index.razor page and add your own custom HTML-layout, CSS structure and actual runtime code.\nUpdating the template with your custom code Blazor allows you to combine web page layout code, basically HTML and CSS, together with actual application source code, in the same razor files. 
I can\u0026rsquo;t compare it with previous development environments, but it seems to be one of the great things about Blazor - and I really like it, since it somewhat simplifies the structure of your application source code itself.\nAnother take is creating the web page layout first, and only adding logic later on. So let\u0026rsquo;s start with creating a basic web page, adding a search field and a button.\nYou can choose to reuse the index.razor sample page and continue from there, or create a new Razor Page and update the route path. For simplicity and ease of this scenario, I\u0026rsquo;m reusing the existing index.razor page. In this part, we start with adding a search field and a search button to the web page layout. Insert the following snippet of code: \u0026lt;PageTitle\u0026gt;Index\u0026lt;/PageTitle\u0026gt; \u0026lt;h1 class=\u0026#34;text-center text-primary\u0026#34;\u0026gt; Blazor Marvel Finder\u0026lt;/h1\u0026gt; \u0026lt;div class=\u0026#34;text-center\u0026#34;\u0026gt; \u0026lt;div class=\u0026#34;p-2\u0026#34;\u0026gt; \u0026lt;input class=\u0026#34;form-control form-control-lg w-50 mx-auto mt-4\u0026#34; placeholder=\u0026#34;Enter Marvel Character\u0026#34; /\u0026gt; \u0026lt;/div\u0026gt; \u0026lt;div class=\u0026#34;p-2\u0026#34;\u0026gt; \u0026lt;button class=\u0026#34;btn btn-primary btn-lg\u0026#34;\u0026gt;Find your Favorite Marvel Hero\u0026lt;/button\u0026gt; \u0026lt;/div\u0026gt; \u0026lt;/div\u0026gt; This adds the necessary objects on the web page. Let\u0026rsquo;s run this update to see what we have for now. So the layout for the search part of the app is done. Let\u0026rsquo;s move on with the design of the actual response / result items. The return from the Marvel API can be presented in a table gridview, but that\u0026rsquo;s not that nice-looking; I remembered having physical cards as collector items as a kid, so I did some searching for a similar digital experience. 
Interestingly enough, there is a CSS-class object \u0026ldquo;card\u0026rdquo;, which nicely reflects this experience. So let\u0026rsquo;s add the next snippet of code for this response layout. Add the following code: \u0026lt;div class=\u0026#34;container\u0026#34;\u0026gt; \u0026lt;div class=\u0026#34;row row-cols-1 row-cols-md-2 row-cols-lg-3\u0026#34;\u0026gt; \u0026lt;div class=\u0026#34;col mb-4\u0026#34;\u0026gt; \u0026lt;div class=\u0026#34;card\u0026#34;\u0026gt; \u0026lt;img src=\u0026#34;https://via.placeholder.com/300x200\u0026#34; class=\u0026#34;card-img-top\u0026#34;\u0026gt; \u0026lt;div class=\u0026#34;card-body\u0026#34;\u0026gt; \u0026lt;h5 class=\u0026#34;card-title\u0026#34;\u0026gt;Marvel Hero Name\u0026lt;/h5\u0026gt; \u0026lt;p class=\u0026#34;card-text\u0026#34;\u0026gt; Character details \u0026lt;/p\u0026gt; \u0026lt;/div\u0026gt; \u0026lt;/div\u0026gt; \u0026lt;/div\u0026gt; \u0026lt;/div\u0026gt; \u0026lt;/div\u0026gt; What this snippet does, is add a \u0026ldquo;container\u0026rdquo; object, holding a responsive row that shows one card per row on small screens and up to three on large screens. The card composition shows an image of the character on top, followed by the Hero name and the character details.\nLet\u0026rsquo;s run the code again to test if everything works as expected. Now wait, we lose quite some time on stopping the app, updating code, and starting it again - so what we can do is use the new VS2022 feature called Hot Reload; if I set this to \u0026ldquo;Hot Reload on Save\u0026rdquo;, it will dynamically update the runtime state of the app based on my edits. Let\u0026rsquo;s check it out. While in debugging mode, check the \u0026ldquo;flame\u0026rdquo; icon in the menu: Enable the setting \u0026ldquo;Hot Reload on File Save\u0026rdquo;. Edit the card-title \u0026ldquo;Marvel Hero Name\u0026rdquo; to \u0026ldquo;Marvel Character Name\u0026rdquo; and check how the app refreshes itself without needing to stop/start. 
The search field is not doing anything yet, so we need to make sure that whenever we type something in that field, it kicks off an API call to the Marvel API back-end. First, we need to use the bind-value parameter for this field, linking it to a search task; Update the line with the field box as follows: \u0026lt;input class=\u0026#34;form-control form-control-lg w-50 mx-auto mt-4\u0026#34; placeholder=\u0026#34;Enter Marvel Character\u0026#34; @bind-value=\u0026#34;whotofind\u0026#34; /\u0026gt; add @bind-value=\u0026quot;whotofind\u0026quot; at the end of the line\nIgnore the errors regarding the \u0026ldquo;whotofind\u0026rdquo; for now.\nNext, we need to update the button code to actually pick up an action when clicking on it; this is done using the @onclick event\nAdd @onclick=FindMarvel\nThe code snippet complains about unknown attributes, which is what we need to add in the actual code section of the app page: Those were the 2 placeholders for the Blazor code section, which can be defined within the same Razor page, a rather unique approach in Blazor. Save the updates again; notice how Hot Reload is not able to refresh the changes just like that, since it is more than just a cosmetic change in HTML. Click Edit for now, since we will add more code to the Page. Add the following @code section below the HTML/CSS layout\nWithin the curly brackets, we can use regular C# code Start with defining a string for the \u0026ldquo;whotofind\u0026rdquo; Followed by defining a method (task) for the FindMarvel onclick action - for now, let\u0026rsquo;s write something to the console to validate our search field is working as expected The code syntax looks like this:\nprivate string whotofind; private async Task FindMarvel() { Console.WriteLine(whotofind); } The string \u0026ldquo;whotofind\u0026rdquo; refers to the search field object, where the Task \u0026ldquo;FindMarvel\u0026rdquo; refers to the button click action. 
So, simply said, whenever we click the button, it will pick up the string content from the search field, and send it to the Marvel API back-end. As we don\u0026rsquo;t have that yet, I\u0026rsquo;m just writing the data to the console, which is always a great test to validate the code is working as expected. Save the file, which will throw a warning regarding the hot reload. Since we added new actual code snippets, hot reload can\u0026rsquo;t just go and recognize it. So a reload is needed\u0026hellip; Select \u0026ldquo;Rebuild and Apply Changes\u0026rdquo;\nEnter the name of a Marvel character, for example \u0026ldquo;thor\u0026rdquo;, which will write that to the Output console. This confirms both the bind-value property as well as the search button and corresponding action behind it are working as they should.\nI think the app is ready from our perspective, so it\u0026rsquo;s time to set up the Marvel API-part of the solution in the next section.\nConfiguring the Marvel Developer API Backend Head over to the Marvel Developer website https://developer.marvel.com and grab the necessary API information. Select Create Account + Accept Terms \u0026amp; Conditions\nGrab the API keys (public \u0026amp; private)\nPublic: 579a41c9eccaf70a3a09c1xxxxxxxxxxx\nPrivate: 6362bd53a4c307c96fb27xxxxxxxxxx\nTo allow requests to come into the Marvel API back-end, you need to specify the source URL domains where the requests are coming from. Add localhost here, which is the URL you use for all testing on your development workstation. Later on, once the app runs in Azure, you need to add the Azure Service URL here as well\u0026hellip;\nOnce set up, head over to the \u0026ldquo;interactive documentation\u0026rdquo;, and walk through the different API placeholders and keywords one can use, to show the capabilities. For the app later on, we will use \u0026ldquo;namestartswith\u0026rdquo;, as it is the easiest to use - name could work, but it requires knowing the explicit name of the character, and having it correctly spelled. Click the \u0026ldquo;Try it out\u0026rdquo; button. 
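As a side note: the browser-based flow in this article only needs the public apikey, because we registered localhost (and later the Azure URL) as an authorized referrer. According to Marvel's API documentation, server-side callers must additionally send a ts (timestamp) parameter and a hash computed as md5(ts + privateKey + publicKey). The following Python sketch shows how such a request URL could be assembled; the key values are placeholders, not real credentials:

```python
import hashlib
import time
from urllib.parse import urlencode

BASE_URL = "https://gateway.marvel.com/v1/public"

# Placeholder keys -- substitute the values from your own Marvel developer account.
PUBLIC_KEY = "your-public-key"
PRIVATE_KEY = "your-private-key"

def build_character_search_url(name_starts_with: str) -> str:
    """Build a server-side Marvel API URL for the /characters endpoint.

    Server-side calls send ts, apikey (the public key) and
    hash = md5(ts + privateKey + publicKey); browser calls from an
    authorized referrer only need apikey.
    """
    ts = str(int(time.time()))
    digest = hashlib.md5((ts + PRIVATE_KEY + PUBLIC_KEY).encode()).hexdigest()
    query = urlencode({
        "nameStartsWith": name_starts_with,
        "ts": ts,
        "apikey": PUBLIC_KEY,
        "hash": digest,  # 32-character hex md5 digest
    })
    return f"{BASE_URL}/characters?{query}"

print(build_character_search_url("thor"))
```

The same nameStartsWith query string is what the Blazor app builds later on, just without the ts/hash pair.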
The result shows the outcome + the exact URL that was used: Blazor Web Assembly already has an HTTP Client built-in, although if you want, you could also find NuGet packages that provide similar functionality - but for now, let\u0026rsquo;s stick with the built-in one. The details of this service are part of the program.cs file The HostEnvironment points to our local development workstation, so the only thing we need to do here is change this Uri to the Marvel API Gateway Uri, https://gateway.marvel.com:443/v1/public/, as follows:\nbuilder.Services.AddScoped(sp =\u0026gt; new HttpClient { BaseAddress = new Uri(\u0026#34;https://gateway.marvel.com:443/v1/public/\u0026#34;) });\nNext, relying on Blazor dependency injection, create a reference to the HttpClient in your Blazor index.razor page @page \u0026#34;/\u0026#34; @inject HttpClient HttpClient \u0026lt;PageTitle\u0026gt;Index\u0026lt;/PageTitle\u0026gt; As you could see from the Marvel output, the API returns JSON; this means, when calling the HttpClient, we also receive a JSON object back, which is not useful for presenting the data as such. 
What we need to do is deserialize the result, for which we create a class. A useful website for helping with this is json2csharp.com, allowing you to paste in a JSON payload, which gets converted to a C# class structure In the Visual Studio project, create a new folder \u0026ldquo;Models\u0026rdquo;, and add a new Item in there, called MarvelResult.cs We could copy the content from the JSON deserialize output into this class object, but for this sample, we don\u0026rsquo;t need all the data provided by Marvel - so I made some changes and ended up with the core pieces of data I want, like image, name, description The code snippet looks like follows:\n{ public class MarvelResult { public string AttributionText { get; set; } public Datawrapper Data { get; set; } public class Datawrapper { public List\u0026lt;Result\u0026gt; Results { get; set; } } public class Result { public int Id { get; set; } public string Name { get; set; } public string Description { get; set; } public Image Thumbnail { get; set; } public class Image { public string Path { get; set; } public string Extension { get; set; } } } } } With the class in place, let\u0026rsquo;s update the code to compile the dynamic URL, instead of the fixed gateway.marvel.com one. First, we need to add a private MarvelResult, reflecting the data class we just created; private MarvelResult _marvelResult; as we stored this in a different folder within the application source code, we also need to update our Page details, telling it to \u0026ldquo;use\u0026rdquo; the Models subfolder to find it. 
This is done using the @using statement on top of the index.razor page, where now the class gets nicely recognized\nLet\u0026rsquo;s update the Task FindMarvel with the required code snippet to recognize the dynamic URL to connect to, as well as calling the HttpClient function As per the Marvel API docs, we need to integrate the API public key into our URL search string, so we have to define the string for this @code { private MarvelResult _marvelResult; private string whotofind; private string MarvelapiKey = \u0026#34;579a41c9eccaf70a3a09c1722ef6c2fc\u0026#34;; After which we can update the Task FindMarvel as follows: private async Task FindMarvel() { Console.WriteLine(whotofind); var url = $\u0026#34;characters?nameStartsWith={whotofind}\u0026amp;apikey={MarvelapiKey}\u0026#34;; _marvelResult = await HttpClient.GetFromJsonAsync\u0026lt;MarvelResult\u0026gt;(url, new System.Text.Json.JsonSerializerOptions { PropertyNamingPolicy = System.Text.Json.JsonNamingPolicy.CamelCase }); } Where the url is coming from the gateway.marvel.com part in the HttpClient service definition + the dynamic url part where we specify the characters search option, the nameStartsWith, pointing at the bind-value object whotofind, and adding the MarvelapiKey string. While all the code pieces are done, note that .NET 6 started checking for nullable values. This is what the green squiggly lines are identifying. What this means is that the value could be equal to null, which could potentially break your application, since it expects to have a real value in there. I wouldn\u0026rsquo;t normally recommend changing this, but for this little sample app, it is totally OK to disable the nullable check. 
This can be done from the Properties of the Project That\u0026rsquo;s all from a code snippet perspective, where now the last piece of updates is back in the HTML layout of the web page itself, updating the content of the card object: Since we most probably get an array of results back, meaning more than one, we need to go through a \u0026ldquo;foreach\u0026rdquo; loop; also, there might be scenarios where we are not getting back any results (like the character doesn\u0026rsquo;t exist, a typo in the character\u0026rsquo;s name,\u0026hellip;), so we will add a little validation check on that too, using an if != null check Let\u0026rsquo;s go ahead!\nAt the top of the card object (class=container), or right below the section where we defined the search button, insert the @if statement, and move the whole div section between the curly brackets @if (_marvelResult != null) { \u0026lt;div class=\u0026#34;container\u0026#34;\u0026gt; Next, define the @foreach loop for the actual card item, and update the image placeholder URL with the content from the MarvelResult JSON string (thumbnail path and extension): @foreach (var result in _marvelResult.Data.Results) { \u0026lt;div class=\u0026#34;col mb-4\u0026#34;\u0026gt; \u0026lt;div class=\u0026#34;card\u0026#34;\u0026gt; \u0026lt;img src=\u0026#34;@($\u0026#34;{result.Thumbnail.Path}.{result.Thumbnail.Extension}\u0026#34;)\u0026#34; class=\u0026#34;card-img-top\u0026#34;\u0026gt; \u0026lt;div class=\u0026#34;card-body\u0026#34;\u0026gt; \u0026lt;h5 class=\u0026#34;card-title\u0026#34;\u0026gt;@result.Name\u0026lt;/h5\u0026gt; \u0026lt;p class=\u0026#34;card-text\u0026#34;\u0026gt; @result.Description \u0026lt;/p\u0026gt; \u0026lt;/div\u0026gt; \u0026lt;/div\u0026gt; \u0026lt;/div\u0026gt; } Run the app and see the result in action That\u0026rsquo;s it for now. Great job! 
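The C# MarvelResult class maps only the fields the card layout needs out of Marvel's larger JSON payload, and the foreach loop then builds the image URL from thumbnail path + extension. A language-neutral sketch of that same deserialize-and-loop idea, using an invented sample payload that mimics the shape of a Marvel /characters response:

```python
import json

# Invented sample payload mimicking the shape of a Marvel /characters response.
sample = """
{
  "attributionText": "Data provided by Marvel.",
  "data": {
    "results": [
      {
        "id": 1009664,
        "name": "Thor",
        "description": "God of Thunder.",
        "thumbnail": {"path": "http://example.com/images/thor", "extension": "jpg"}
      }
    ]
  }
}
"""

def to_cards(payload: str) -> list:
    """Extract just the fields the card layout needs, like the C# MarvelResult class."""
    doc = json.loads(payload)
    cards = []
    for result in doc["data"]["results"]:
        thumb = result["thumbnail"]
        cards.append({
            "name": result["name"],
            "description": result["description"],
            # The card image URL is thumbnail path + "." + extension.
            "image": f"{thumb['path']}.{thumb['extension']}",
        })
    return cards

for card in to_cards(sample):
    print(card["name"], card["image"])
```

This is the same transformation GetFromJsonAsync performs for us in the Blazor app, with camelCase property matching handling the lowercase JSON keys.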
Making the cards \u0026lsquo;flip\u0026rsquo; Note: this part is left out of the Festive Tech Calendar presentation to keep the video within the expected time - what we\u0026rsquo;re doing here is integrating more CSS layout components on a new Page in the web app, which provides a more dynamic look-and-feel to the Marvel cards we have.\nWhile CSS can be difficult - and trust me, it is - I literally googled for \u0026ldquo;flipping cards CSS\u0026rdquo; and found a snippet of code on https://w3schools.com, and it worked almost straight away\u0026hellip;\nHere we go:\nLet\u0026rsquo;s copy the current state of the page we have, and store it in a different page; so we grab index.razor and copy/paste it to flip.razor this will allow me to also demonstrate some other Blazor features around Menu Navigation and how to use object-specific css; meaning, CSS that will only be picked up by the specific page, and not interfere with the rest of the application CSS we already have.\nOpen the flip.razor page; First thing we need to change, is the Page Routing, pointing to the \u0026ldquo;/flip\u0026rdquo; route instead of \u0026ldquo;/\u0026rdquo;, as that one is linked to the index.razor page.\nGo to this link: https://www.w3schools.com/howto/tryit.asp?filename=tryhow_css_flip_card\nSelect the code between the \u0026lt;style\u0026gt; tags\n\u0026lt;style\u0026gt; body { font-family: Arial, Helvetica, sans-serif; } .flip-card { background-color: transparent; width: 300px; height: 300px; perspective: 1000px; } .flip-card-inner { position: relative; width: 300px; height: 300px; text-align: center; transition: transform 0.6s; transform-style: preserve-3d; box-shadow: 0 4px 8px 0 rgba(0,0,0,0.2); } .flip-card:hover .flip-card-inner { transform: rotateY(180deg); } .flip-card-front, .flip-card-back { position: absolute; width: 300px; height: 300px; 
-webkit-backface-visibility: hidden; backface-visibility: hidden; } .flip-card-front { background-color: #bbb; color: black; } .flip-card-back { background-color: #2980b9; color: white; transform: rotateY(180deg); } \u0026lt;/style\u0026gt; and paste this under the @using section and the section of the code you already have (Note: ignore the @using marveltake2.models in the screenshot, it\u0026rsquo;s the name of my test project) Next, we need to update the layout of the card item itself, in the section within the \u0026ldquo;foreach\u0026rdquo; loop, as that\u0026rsquo;s where the data is coming in, and getting displayed @foreach(var result in _marvelResult.Data.Results)\n{ \u0026lt;div class=\u0026#34;col mb-4\u0026#34;\u0026gt; \u0026lt;div class=\u0026#34;flip-card\u0026#34;\u0026gt; \u0026lt;div class=\u0026#34;flip-card-inner\u0026#34;\u0026gt; \u0026lt;div class=\u0026#34;flip-card-front\u0026#34;\u0026gt; \u0026lt;img class=\u0026#34;thumbnail\u0026#34; src=\u0026#34;@($\u0026#34;{result.Thumbnail.Path}.{result.Thumbnail.Extension}\u0026#34;)\u0026#34; style=\u0026#34;width:300px;height:300px;\u0026#34;\u0026gt; \u0026lt;/div\u0026gt; \u0026lt;div class=\u0026#34;flip-card-back\u0026#34;\u0026gt; \u0026lt;h5\u0026gt;@result.Name\u0026lt;/h5\u0026gt; \u0026lt;p\u0026gt; @result.Description \u0026lt;/p\u0026gt; \u0026lt;/div\u0026gt; \u0026lt;/div\u0026gt; \u0026lt;/div\u0026gt; \u0026lt;/div\u0026gt; } What we do here is basically point to the different CSS-snippets for each style we want to get applied; we have the flip-card div class, next the flip-card-inner and flip-card-front. 
For the front, we want to use the image, so we keep the img class details as is, but change the width and height to 300px, to make sure it looks like a nice rectangle on screen.\nNext, we add a class for the flip-card-back, where we will show the Marvel character name and description.\nThat\u0026rsquo;s all we need to have for now; so let\u0026rsquo;s have a look, by launching the app\nSince the previous page was index.razor, it\u0026rsquo;s getting loaded by design (from the index.html). So we need to update the URL to pick up the /flip page, by adding it to the end of the URL, such as https://localhost:7110/flip (note, the port number will be different on your end) Search for a character, and see the outcome cards: About the same as before, but let\u0026rsquo;s hover over a card: It flips and shows the character name and description (if provided by Marvel) on the back of the card Cool!!\nLet\u0026rsquo;s switch back to the code and add a menu item for the \u0026ldquo;flip\u0026rdquo; page to our left-side navigation menu. Open the file NavMenu.razor within the Shared folder. Add a new section for this menu item, by copying one from above + make minor changes to the href reference (flip) and change the Menu item word to Flip The icons are coming from the Open Iconic library, which is also referenced as part of the Blazor bootstrap. Note that you can change to MudBlazor, Telerik, or several other bootstrap frameworks to have layout-rich styles. Open https://useiconic.com and find a suitable icon, for example loop-circular \u0026lt;div class=\u0026#34;nav-item px-3\u0026#34;\u0026gt; \u0026lt;NavLink class=\u0026#34;nav-link\u0026#34; href=\u0026#34;flip\u0026#34;\u0026gt; \u0026lt;span class=\u0026#34;oi oi-loop-circular\u0026#34; aria-hidden=\u0026#34;true\u0026#34;\u0026gt;\u0026lt;/span\u0026gt; Flip \u0026lt;/NavLink\u0026gt; \u0026lt;/div\u0026gt; When you run the app again, the new Menu item will appear. 
Given the href=\u0026quot;flip\u0026quot;, it will redirect to the base URL (https://localhost:7110) /flip route Since we are changing the layout a bit here, why not modify the default purple color from the Blazor template, to the well-known Marvel dark-red?\nOpen MainLayout.razor Notice the sidebar div, and paste in the following style object: \u0026lt;div style=\u0026#34;background-image:none;background-color:darkred;\u0026#34; class=\u0026#34;sidebar\u0026#34;\u0026gt; This changes the default purple color to darkred. This completes our development part. Let\u0026rsquo;s move on to the next step, and integrate our app code with GitHub Source Control (which actually should have happened at the start, before writing a single line of code - but hey, it\u0026rsquo;s a sample scenario, right?) Integrating Visual Studio with GitHub Source Control With that, let\u0026rsquo;s close this project and save it to GitHub, so you can grab it as a reference. From the explorer, click the \u0026ldquo;Git changes\u0026rdquo; tab and select Create GitHub Repository Click Create and Push, and provide a description as commit message (I typically call this first action the \u0026ldquo;init\u0026rdquo;).\nWait for the git clone action to complete successfully. Connect to the GitHub repository and confirm all source code is there.\nNote: the actual source code I used for the Festive Tech Calendar presentation can be found here: petender/FestiveBlazor2022live (github.com)\nWhenever you make changes in the source code in Visual Studio and save them, Git Source Control will keep track of these and allow you to commit the changes into the GitHub repository. I recommend committing changes frequently, basically after each \u0026ldquo;important\u0026rdquo; update to the code. 
Publish Blazor Web Assembly app to Azure Static Web Apps In this last section, I will show you how to publish this webapp to Azure Static Web Apps, a web hosting service in Azure for static web frameworks like Blazor, React, Vue and several others.\nFrom the Azure Portal, create new resource / static web app\nProvide base information for this deployment:\nResource group - any name of choice\nName of the app - any unique name for the app\nSource = GitHub\nPlan = Free\nRegion = any region of your choice\nScroll down and authenticate to GitHub; Next, select your source repo in GitHub where the code is stored (the one we just created)\nClick Build Details to provide more parameters regarding the Blazor app itself. Note you need to change the default App location from /Client to /, since our source code is in the root of the Blazor Web Assembly project, without using an ASP.NET hosted back-end.\nOnce published, it will trigger a GitHub Actions pipeline to publish the actual content\nThe YAML pipeline code is stored in the .github/workflows/ subfolder within the GitHub repository. You shouldn\u0026rsquo;t need to update this file though. It just works out-of-the-box.\nCheck in Actions what\u0026rsquo;s happening:\nOpen the details for the Build \u0026amp; Deploy workflow\nSelecting any step in the Action workflow will show more details:\nWait for the workflow to complete successfully.\nNavigate back to the Azure Static Web app, click its URL and see the Blazor Web App is running as expected.\nWhen searching for a Marvel Character, this throws an error though, which can be validated from the Inspect option of the browser:\nRemember at the start, where we configured the API calls at the Marvel Developer site, we needed to specify the source URLs from where the calls are allowed. This Azure Static Web App URL is not configured yet. (Hence why I didn\u0026rsquo;t worry too much about including my API key as a hard-coded string in the source code.)\nAdd the Static Web App URL to the list of authorized referrers, and click Update to save those changes. 
Trigger a new search, which should reveal the actual Marvel character details. Remember you can use both the default (index) page, as well as the flip page.\nSummary In this article, I provided all the necessary steps to build a Blazor Web Assembly application. Starting from the default template, you updated snippets of code to create a search field and corresponding action button to trigger the search. You learned about using HttpClient to interact with an external API back-end. Once this was all working, you looked into using some additional \u0026ldquo;flip card\u0026rdquo; CSS layout features, and how to update the Blazor Navigation Menu.\nOnce the development work was done, we saved the code in a GitHub repository.\nLast, you deployed an Azure Static Web App, interacting with the GitHub repository to pick up the source code and publish it using a GitHub Actions workflow.\nI would like to thank the organizing team of Festive Tech Calendar 2022 for having accepted my session submission for the 3rd year in a row. Especially since this was my first attempt to do some (semi)live coding, to share my excitement of how I learned to write and build code at age 47. I\u0026rsquo;m already brainstorming on what Blazor app I can share in next year\u0026rsquo;s edition\nHappy Holidays everyone!\n/Peter\n","date":"2022-12-28T00:00:00Z","permalink":"/post/steps-for-blazor-marvel---webassembly/","title":"Festive Tech Calendar 2022 - Building a Marvel Hero app using Blazor Web Assembly and Azure Static Web Apps"},{"content":"I have to admit, I am not a real book reader, and the bit of reading I am doing typically involves Azure-related tech books, or what my wife and family describe as \u0026ldquo;business books\u0026rdquo; (Biographies, Non-fiction company stories such as the start of Netflix, Uber, Silicon Valley,\u0026hellip;). For several years, I relied on the Amazon Kindle app on my (cheap) Samsung A8 tablet, dating from the time I was traveling weekly and wanted to travel light. 
While it still runs fine, the 8\u0026quot; form factor is sometimes a bit small - especially when screenshots of development code are involved - and I was also missing the capability to take notes (apart from the basic notes in Kindle app).\nApart from reading e-books, I\u0026rsquo;m also a big fan of Moleskine writing pads and pens, especially their Smart Writing System Kit, which comes with a Bluetooth-enabled pen, yet is just like a regular ink-based pen, and allows for your writings to be stored electronically per page.\nBut for a long time, it felt like having 2 devices was too much, since I was taking notes using Moleskine, while reading from the Samsung. Often, I didn\u0026rsquo;t have both devices with me (reading books in the bedroom, where the Moleskine was in my office\u0026hellip;)\nI honestly had my eyes on e-reader devices for a while, specifically Remarkable. However, as I would mainly use that device for reading, I found them too expensive and couldn\u0026rsquo;t spend the money on it (other expenses too, you know\u0026hellip;)\nUntil I spotted a Twitter post from Scott Hanselman, offering an Onyxboox Note Air 2 for sale. This was another device I had my eyes on for a while.\nSo when I saw this post, I DM-ed Scott and the nice gentleman he is, we easily closed the deal, for a price I was willing to pay (and I still owe him a lunch/dinner too).\nAs soon as it arrived in the mail a few days later, I started using it. 
What pulled me in was the Onyx Boox Reader feature, which allowed me to read my traditional PDF-documents, but - more importantly - also let me take handwritten notes on the side, circling words or parts of a paragraph, to emphasize text parts that are important to remember.\nNext to the Boox Reader app, one of the other convenient things about the device is BooxDrop, a built-in app which allows for copying files from your local machine onto the Boox device using just a wifi connection, which is super convenient.\nSince the device is running on Android, it also allows for installing about any regular Android App, including the Amazon Kindle Reader App. This was a big plus for me, since I\u0026rsquo;ve been using that platform for buying most of my e-books over the years. The only downside, though, is that note-taking still works the Kindle-way as before, which annoyed me at first (as fluent note-taking on books was one of the things that got me interested in the device from the beginning\u0026hellip;); The solution I have in mind for newer books, is switching from buying Kindle format back to PDF, and reading them from the great Boox Reader app.\nWhile I haven\u0026rsquo;t used it too much for other things than reading, the built-in Notes app almost feels like writing on paper. The complementary pen obviously is a big part of this. It allows for drawing, recognizing different grey-scales depending on how hard you push the pen on the screen, and it also provides handwriting to actual text transformation. I started using the Notes more and more during my day as well, where before I was writing on paper. Often during a training, I get questions from learners, which is a great source for blog post inspiration. Too often though, I threw away those pieces of paper at the end of the week. Now I have them on the device, and can cross them off once the blog post is published.\nOne last thing I want to highlight, is the great battery-life of the Onyx Boox. 
Even with an average reading time of about an hour per day, the battery lasts for weeks. I can\u0026rsquo;t even remember when I charged the device for the last time\u0026hellip; maybe not even since I got it, to be honest.\nI want to thank Scott for his kindness and for convincing me about several features of the device in DMs before I decided to buy it. I can say that Scott made me read more books; heck, that would have been a hell of a blog post title :)\nI have to go now, as I am just starting my next book, C# 11 and .NET 7 development.\nIf you have been looking for a convenient e-reader with some additional note-taking features, I can definitely recommend the Onyx Boox Note Air 2. More info can be found on the Onyx website.\nCheers!!\n/Peter\n","date":"2022-12-23T00:00:00Z","permalink":"/post/how-boox-got-me-back-to-reading/","title":"How OnyxBoox - with some help from a friend - got me back to reading more... books"},{"content":"This post is a short one, but about something that has been bugging me for a while, and that I didn\u0026rsquo;t even know how to fix.\nAs I am deploying a lot of different Azure resources every week while delivering Azure trainings, I also run a lot of deletions during the week, or at least by the end of the week. 
Only to repeat roughly the same process the week after, when doing another Azure training delivery.\nOne of the issues I was facing, although not a big deal, is the \u0026ldquo;Recent Resources\u0026rdquo; list on the Azure Portal homepage.\nWhile I\u0026rsquo;m sure it is a very useful feature for more traditional Azure admins, it\u0026rsquo;s less interesting when the resources in there are not available anymore, because of the cleanup tasks I\u0026rsquo;m running.\nSo I was pleasantly surprised when I talked about this with someone from the Azure team in a totally different context call earlier this week, who shared the following solution with me:\nFrom the Azure Portal search bar, search for \u0026ldquo;Recent\u0026rdquo;. This shows a list of all recent Azure resources you connected to from the Azure Portal.\nNotice the Clear menu option, and click on it. Close this section, and return to the Azure Portal homepage. Notice how the Recent Resources list is now nicely cleaned up :) A little gem if you ask me, saving me a little frustration in my next delivery\u0026hellip;\nThat\u0026rsquo;s it for now folks!!\n/Peter\n","date":"2022-12-23T00:00:00Z","permalink":"/post/how-to-clear-the-azure-portal-recent-resources-list/","title":"How to clear the Azure Portal Recent Resources list"},{"content":"In this post, I want to share my review of another Blazor book I read recently, Building Blazor WebAssembly Applications with gRPC, this time from Vaclav Pekarek, published by Packt Publishing and available on Amazon as well as other e-book subscription platforms.\nIf you have been following me for a while, you know I\u0026rsquo;m gradually learning more about coding and developing applications, especially using the Blazor .NET framework.\nWhat intrigued me even more with this book is the gRPC integration. While I had heard about it before - admittedly only from a distance - I never really looked into it. 
So besides learning more about Blazor itself, seeing how other, much more advanced developers are using the framework, and learning how they write code, I also learned more about what gRPC is all about.\nWhat is gRPC gRPC was developed by Google, and is described as a high-performance Remote Procedure Call (RPC) framework. (I remember \u0026rsquo;traditional\u0026rsquo; RPC from my long-gone Exchange Server consultant days\u0026hellip;) Using gRPC, a client application can directly call a method on a remote server back-end as if it were a local object to the client, making it a perfect choice for distributed applications and service-oriented architectures. As with any similar RPC-based system - such as in my Exchange Server past - the concept starts with defining a service, specifying the methods that can be called remotely, together with their parameters and return types. On the server side, the service interface runs, and the gRPC server component handles the incoming requests.\ngRPC is supported across all popular development languages, such as Java, Ruby, Go, Python,\u0026hellip; and now also in Blazor .NET.\nIf you want to learn more about gRPC, head over to the official gRPC docs.\nWhat is Blazor WebAssembly Blazor is a high-performance web development framework, created by Microsoft, and part of the broader .NET language family. It allows developers to write applications using the familiar C# language. The applications are supported in all modern web browsers using the WebAssembly technology. Where developers would have reached for JavaScript before, they can now build the same Single Page Applications (SPA) using the C# .NET language. 
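Coming back to the gRPC side for a moment: the contract-first idea - defining a service, its remotely callable methods, and their parameter and return types - lives in a .proto file. Here is a minimal sketch of my own; the service and message names are hypothetical (loosely inspired by a movie app), not taken from the book's actual code:

```proto
// Illustrative gRPC contract; all names here are made up for this sketch.
syntax = "proto3";

// The service lists the methods a client can call remotely.
service MovieService {
  // Unary call: one request message in, one response message out.
  rpc GetMovie (GetMovieRequest) returns (MovieReply);
}

// Parameters and return types are defined as strongly typed messages.
message GetMovieRequest {
  int32 id = 1;
}

message MovieReply {
  int32 id = 1;
  string title = 2;
  int32 release_year = 3;
}
```

From this one contract, gRPC tooling can generate both the server base classes and the client stub (in .NET, for example, via the Grpc.Tools NuGet package), which is what lets a client call GetMovie as if it were a local method.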
Blazor exists in two flavors: Blazor WebAssembly, which runs entirely in the browser, and Blazor Server, where it runs on an ASP.NET server back-end.\nIf you want to learn more about Blazor, you might have a look at some of my former blog posts on how to get started:\nhttps://www.007ffflearning.com/post/efficiently-handling-secrets-as-a-blazor-.net-developer/\nhttps://www.007ffflearning.com/post/deploying-blazor-apps-using-dotnet-commandline/\nhttps://www.007ffflearning.com/post/coding-apps-in-blazor-from-a-non-developer/\nhttps://www.007ffflearning.com/post/coding-apps-in-blazor-from-a-non-developer-part-2/\nBook Review With that out of the way, let\u0026rsquo;s have a look at what the book has to offer.\nI loved going through the book, as it is hands-on from the start. You kick off the project starting from the Blazor WASM template in Visual Studio / VS Code, then heavily clean it up so you are almost starting from a blank canvas, and learn how to build a web application front-end that connects to a SQL Server back-end. Without gRPC, this would probably rely on REST API calls, so that was a nice differentiator for me to learn about.\nAlready in the first chapter, Vaclav jumps into code snippets, clearly explaining how they work, but also often explaining the reasoning behind them. So instead of just copy/pasting code into your own applications, you can almost look into his brain and way of thinking, which helped me understand the concepts much better.\nChapter 2 is where you create your first Blazor WebAssembly application, starting from a template, but heavily customizing it into a workable application example. Chapter 3 describes Entity Framework as the way to create a database back-end, and how to interact with it.\nChapter 4 brings the two worlds together, using REST API calls, allowing for CRUD operations from the web application towards the database. 
This was really helpful for me, as I hadn\u0026rsquo;t done much around interacting with an actual database to create, update or delete information. While the sample app we\u0026rsquo;re building is around movies and viewers, the concept is valid for about any database-type you could think of (online webshop, HR application with employee data, overall customer information management, etc\u0026hellip;)\nChapter 5 is where the gRPC integration becomes important. You learn how to build the gRPC services on the server side, as well as how to consume them from the web app client side. This was mind-blowing to me, as it was something totally new in my knowledge spectrum. While functionally you are doing the same as with REST, this somewhat felt easier to develop, and the performance seemed better (as in pulling up data from the database\u0026hellip;). While my recordset was quite small, I can see a big performance increase here for real-life applications with thousands or tens of thousands of records to work with continuously.\nHaving arrived at this point, I think you could say you have learned enough to continue your own journey on how to build more complete, powerful WebAssembly-based client applications, connecting to a database server back-end. The possibilities are unlimited.\nHowever, Vaclav didn\u0026rsquo;t stop here, but continued the book with a chapter on Source Generators. As he explains, this technology allows for generating source code automatically, basically helping developers add more functionality to applications, without needing to write all the code themselves.\nIn the last chapter, Chapter 7, Vaclav shares some best practices on how to use gRPC together with C#.\nSummary While this book wasn\u0026rsquo;t the largest (about 165 pages), it allowed me to learn many new things about what it takes to build WebAssembly-based web applications, using gRPC instead of the more traditional REST API method. 
I\u0026rsquo;m still not an experienced developer, but it teased me into looking at more capabilities of Blazor, as well as how to build more service-oriented applications.\nI would recommend this book to developers who are new to Blazor like myself, but it is definitely also a good read for more experienced developers who want to learn more about gRPC-based communication between client and server.\nI\u0026rsquo;m off now, providing my 5-star review on Amazon for this book.\nSee you later folks!!\nCheers!!\n/Peter\n","date":"2022-12-17T00:00:00Z","permalink":"/post/packt-book-review---blazor-wasm-with-grpc/","title":"Book review - Building Blazor WebAssembly Applications with gRPC"},{"content":"Hey readers,\nThis is actually a less technical post, although it can lead to a lot of technical resources. I honestly wrote this blog for myself this time, as I struggled to find a way to get free books in my Packt Publishing subscription. I had credits, but I always forget how to claim them :). So I guess I\u0026rsquo;m not the only one having that problem.\nWhat are Packt Credits, and how to get them Packt allows for one-time payment for a book, a monthly paid subscription, or a 12-month or 18-month one. It\u0026rsquo;s with the last two that you get a credit added to your account once per month, as well as earning an additional credit each month upon completing 40 sections of learning. Simply said, the more sections (chapter units) you read monthly, the more credits you get as a bonus.\nEven if you are on a monthly subscription, while you don\u0026rsquo;t get a credit automatically, you can still get bonus credits for every 40 sections of learning completed.\nOK, you got credits, but how to use them This was the part I struggled with, every few months again, when trying to use them. I know I can use them, but the steps aren\u0026rsquo;t stamped into my brain. 
(one can wonder if that\u0026rsquo;s something wrong with my brain, or with the process ;)\nLog on to Packt Subscription. From the upper right corner, select My Library, followed by selecting credits. This shows the number of credits you have available. From the \u0026ldquo;search\u0026hellip;\u0026rdquo; field, find any topic or book of your interest, and select it from the list of search results. This opens the detailed view of the book\u0026rsquo;s chapters, author details, price, etc. From the navigation pane to the right, select the first chapter title. This redirects you to another details section of the selected book, but showing a link to use credits this time. Confirm the popup message, asking if you want to use a credit for buying the book. This worked :). To access the actual book, go back to the My Library menu in the upper right corner, and select $ Owned. Select the book you just bought from the list of available books you own. Besides reading online, you can now also download the book in its available format of choice (typically PDF and/or EPUB). From the reading view, navigate to the menu bar shown to the right, and select download. That\u0026rsquo;s it!! While going through these steps, it isn\u0026rsquo;t all too hard to buy the book using the credits you have. 
To me though, it would make more sense to have the \u0026ldquo;buy using credits\u0026rdquo; option already show up on the book\u0026rsquo;s summary page, instead of needing to click through multiple times before finding it.\nIf you liked this article, feel free to mention me in a Tweet or Toot, and you might win yourself a free Packt e-book from one of my credits.\nCheers!!\n/Peter\n","date":"2022-11-12T00:00:00Z","permalink":"/post/using-packt-credits-for-free-books/","title":"Using Packt Publishing Credits for free books"},{"content":"For about 3 years now, I\u0026rsquo;ve been running this personal blog site using Hugo, hosted on an Azure Storage static website with Azure Front Door as load balancer/TLS protection service.\nAbout 18 months back, Microsoft released Azure Static Web Apps, a platform built for exactly that: hosting static web sites built with frameworks such as Vue, React, Angular, Svelte, .NET Blazor, and also\u0026hellip; Hugo :).\nAs I had to migrate my resources to a new Azure subscription and tenant, I thought this was a perfect moment to migrate to SWA. While the process was surprisingly smooth, I wanted to blog about it, to help and convince others who are in the same situation as myself, showing how easy it actually is.\nIn short, the process involves the following:\nHave a backup (copy) of the site files from the Azure Storage Account. 
If you don\u0026rsquo;t have them in a GitHub or Azure DevOps repository already, look into the free Azure Storage Explorer tool to copy the data aside to your local machine.\nDepending on your DevOps platform of choice (both GitHub and Azure DevOps are supported), you need to have a repository available already to be used for Azure Static Web Apps.\nDeploy a new Azure Static Web Apps resource from the Azure Portal, as follows:\na) Create new Resource / Static Web Apps\nb) Complete the necessary project details:\nSubscription Azure Resource Group Unique name for the Static Site App Hosting Plan - Free, which gives you all you need for running Hugo with a public SSL/TLS certificate and hostname c) Next, provide the necessary deployment details. Notice SWA relies on a DevOps pipeline process, which can be GitHub or Azure DevOps. The pipeline compiles the Hugo Markdown files (your blog articles) into HTML files, and gets triggered every time something changes in the repository (like when you write a new blog post, delete a post or update a post\u0026hellip;)\nIn my setup, I chose Azure DevOps, but the flow is the same in GitHub.\nd) Confirm the creation of the resource, and give it a few minutes. Once created, navigate to the new Static Web App resource blade:\ne) From here, notice the edit workflow section, which points to a CI/CD pipeline YAML file. This is the actual \u0026ldquo;engine\u0026rdquo; doing all the work. 
Open this link.\nThis is what it looks like in my scenario:\nname: Azure Static Web Apps CI/CD\npr:\n  branches:\n    include:\n      - main\ntrigger:\n  branches:\n    include:\n      - main\njobs:\n  - job: build_and_deploy_job\n    displayName: Build and Deploy Job\n    condition: or(eq(variables[\u0026#39;Build.Reason\u0026#39;], \u0026#39;Manual\u0026#39;),or(eq(variables[\u0026#39;Build.Reason\u0026#39;], \u0026#39;PullRequest\u0026#39;),eq(variables[\u0026#39;Build.Reason\u0026#39;], \u0026#39;IndividualCI\u0026#39;)))\n    pool:\n      vmImage: ubuntu-latest\n    variables:\n      - group: Azure-Static-Web-Apps-gentle-desert-046399d10-variable-group\n    steps:\n      - checkout: self\n        submodules: true\n      - task: AzureStaticWebApp@0\n        inputs:\n          azure_static_web_apps_api_token: $(AZURE_STATIC_WEB_APPS_API_TOKEN_GENTLE_DESERT_046399D10)\n          ###### Repository/Build Configurations - These values can be configured to match your app requirements. ######\nNormally, you shouldn\u0026rsquo;t have to change anything on this YAML pipeline file, unless your Hugo theme tells you to make alterations. In short, whenever there is a change (\u0026ldquo;a trigger\u0026rdquo;) in the content (\u0026ldquo;include main\u0026rdquo;), it runs the job and related task (\u0026ldquo;AzureStaticWebApp@0\u0026rdquo;). This runs on an Azure DevOps build agent, an Azure-hosted Ubuntu virtual machine, with all the necessary software and tools needed to compile the website updates.\nWait for your pipeline to complete and run successfully. The first time, it will most probably fail, since there is no data to compile yet. Let\u0026rsquo;s fix this!!\nFrom the DevOps environment, go to Repos (GitHub or Azure DevOps), and clone this repo to your local machine. I\u0026rsquo;m using Visual Studio Code, as it\u0026rsquo;s a brilliant Markdown editor with Git integration out-of-the-box. Once the repo got cloned, copy all the folders and files from your backup into this new repo folder. 
This will be recognized as a \u0026ldquo;folder change\u0026rdquo; by the Git source control process, which asks you to commit the changes and synchronize them back to your repository. Perform both steps in sequence: the commit, followed by the Sync Changes process, which uploads all changed files from your local machine to the DevOps repo. From the DevOps environment, validate that the Hugo folders and files are present in the repository. Given the automatic trigger, the pipeline will pick up the change and execute a new run. Wait for this to complete successfully. Connect to the Azure Static Web App resource URL (something like https://gentle-desert-123456789.2.azurestaticapps.net/ in my case), and behold, your blog website is live!! While this completes the successful migration of the Hugo blog site, we are not 100% done yet. For now, it is only listening on the internal SWA web address, which we should update to a public domain name like www.007FFFLearning.com\nLuckily, there is a nifty feature in Static Web Apps which allows you to add a custom domain, together with a public SSL/TLS certificate for encryption - all included in the FREE plan! Sweetly done Microsoft!\nFrom the Static Web Apps blade, navigate to custom domains. Click \u0026lsquo;Add Domain\u0026rsquo;, and select the options that are relevant to you. I have my public domain in GoDaddy, but other options, including Azure DNS itself, are also available. Add the custom domain name, and copy the CNAME record details over into your actual DNS hosting solution management portal. Once this is done, head back over to this Azure Custom Domain blade and confirm the domain validation. Note - depending on the DNS provider in use, this might take up to several hours. Mostly, this will only be a few minutes though. That\u0026rsquo;s it! From now on, your static web site will listen on both the internal SWA domain, as well as the public domain you have configured here. 
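Stepping back to the repository part of the flow for a moment: the clone, copy, commit and sync sequence boils down to a handful of git commands. Below is a minimal sketch of that flow; the paths, file content and commit message are made up for illustration, and a local bare repository stands in for the real Azure DevOps/GitHub remote so the sketch is runnable offline:

```shell
# Stand-in for the Azure DevOps/GitHub remote (assumption: a local bare repo;
# in real life you would clone the repo URL your Static Web App is wired to).
rm -rf /tmp/swa-demo-remote.git /tmp/swa-demo-local
git init --bare --quiet /tmp/swa-demo-remote.git

# 1. Clone the (still empty) repository to your local machine.
git clone --quiet /tmp/swa-demo-remote.git /tmp/swa-demo-local

# 2. Copy your Hugo backup into the working copy (one file here, for brevity).
printf 'title = "007FFFLearning"\n' > /tmp/swa-demo-local/config.toml

# 3. Commit the "folder change" and sync it back; the push is what triggers
#    the Static Web Apps pipeline to run a new build.
cd /tmp/swa-demo-local
git add .
git -c user.name=demo -c user.email=demo@example.com commit --quiet -m "Add Hugo site files"
git push --quiet origin HEAD
```

In Visual Studio Code, the Commit and Sync Changes buttons in the Source Control view perform exactly these last `git add`/`git commit`/`git push` steps for you.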
This is all it took to migrate my Hugo blog site from an Azure Storage Account static site to the newer Azure Static Web Apps. I\u0026rsquo;m now going to delete my old Resource Group, since I don\u0026rsquo;t need that Azure Storage Account nor the Azure Front Door anymore, saving me about $45/month.\nIn the next post, I\u0026rsquo;ll describe how to add Azure Application Insights to it, to continue getting visitor statistics.\nHoller at me on Twitter if you should have any questions.\nCheers!!\n/Peter\n","date":"2022-10-30T00:00:00Z","permalink":"/post/deploying-or-migrating-a-hugo-blog-on-azure-static-web-apps/","title":"How I migrated my Hugo site from Azure Storage Site to Azure Static Web Apps"},{"content":"Hey folks,\nEarlier this week, I wrote about how I migrated my Hugo blog site from an Azure Storage Account-based site to the newer Azure Static Web Apps.\nWhile this was a smooth process, both migrating the actual site content as well as transferring the public domain name, the piece missing was the statistics. I always used Azure Application Insights for this, but specifically for Azure Static Web Apps, App Insights is only supported when using Functions (as per this article on the Microsoft docs), which I don\u0026rsquo;t have with Hugo.\nHowever, App Insights also supports a JavaScript-based approach, and this works fine with a Hugo static website.\nLet\u0026rsquo;s get this going\u0026hellip;\nThe first step involves deploying an Azure Application Insights resource from the portal. Enter the necessary details to get your App Insights resource deployed: Subscription Resource Group App Insights Instance Name Region Resource-Mode Workspace-based Log Analytics Workspace: accept the suggested one (or select an existing one if you already have one and want to consolidate the logging information) After a few minutes, the resource gets created successfully. Navigate to the resource blade. From the blade, notice the Instrumentation Key in the top right corner. 
Copy this key aside, as you need to add it into the Hugo config file. With App Insights up and running, let\u0026rsquo;s head over to our Hugo site source files. Look for a file \u0026ldquo;config.toml\u0026rdquo; in the root of your Hugo folder structure. Open the file in an editor, and add the following snippet into the \u0026ldquo;[params]\u0026rdquo; section of the config file:\nazureAppInsightsKey = \u0026#34;4ecca3df-ab58-4882-aaaa-123456789\u0026#34;\nReplace the sample key with the Instrumentation Key of your Azure Application Insights resource you copied earlier.\nNext, to make sure your App Insights statistics get captured for every visit of every page of the site, add a little snippet of code for App Insights to the top section of the baseof.html file, which should be in the \\themes\\\u0026lt;theme\u0026gt;\\layouts\\_default\\ folder of the Hugo theme you are using:\n\u0026lt;!DOCTYPE html\u0026gt;\n\u0026lt;html lang=\u0026#34;{{ .Site.LanguageCode }}\u0026#34;\u0026gt;\n{{ partial \u0026#34;appinsights.html\u0026#34; . }} \u0026lt;========= add this line\n{{ partial \u0026#34;head.html\u0026#34; . }}\n{{ partial \u0026#34;nav.html\u0026#34; . }}\n\u0026lt;!-- Page Header --\u0026gt;\n{{ block \u0026#34;header\u0026#34; .}}\n... 
Next, create a new file called appinsights.html in the \\themes\u0026lt;theme\u0026gt;\\layouts\\partials\\ folder of the Hugo Theme you are using, having the following code in it: 1 2 3 4 5 6 7 8 9 10 11 12 13 14 {{ if .Site.Params.azureAppInsightsKey }} \u0026lt;script type=\u0026#34;text/javascript\u0026#34;\u0026gt; !function(T,l,y){var S=T.location,u=\u0026#34;script\u0026#34;,k=\u0026#34;instrumentationKey\u0026#34;,D=\u0026#34;ingestionendpoint\u0026#34;,C=\u0026#34;disableExceptionTracking\u0026#34;,E=\u0026#34;ai.device.\u0026#34;,I=\u0026#34;toLowerCase\u0026#34;,b=\u0026#34;crossOrigin\u0026#34;,w=\u0026#34;POST\u0026#34;,e=\u0026#34;appInsightsSDK\u0026#34;,t=y.name||\u0026#34;appInsights\u0026#34;;(y.name||T[e])\u0026amp;\u0026amp;(T[e]=t);var n=T[t]||function(d){var g=!1,f=!1,m={initialize:!0,queue:[],sv:\u0026#34;4\u0026#34;,version:2,config:d};function v(e,t){var n={},a=\u0026#34;Browser\u0026#34;;return n[E+\u0026#34;id\u0026#34;]=a[I](),n[E+\u0026#34;type\u0026#34;]=a,n[\u0026#34;ai.operation.name\u0026#34;]=S\u0026amp;\u0026amp;S.pathname||\u0026#34;_unknown_\u0026#34;,n[\u0026#34;ai.internal.sdkVersion\u0026#34;]=\u0026#34;javascript:snippet_\u0026#34;+(m.sv||m.version),{time:function(){var e=new Date;function t(e){var t=\u0026#34;\u0026#34;+e;return 1===t.length\u0026amp;\u0026amp;(t=\u0026#34;0\u0026#34;+t),t}return e.getUTCFullYear()+\u0026#34;-\u0026#34;+t(1+e.getUTCMonth())+\u0026#34;-\u0026#34;+t(e.getUTCDate())+\u0026#34;T\u0026#34;+t(e.getUTCHours())+\u0026#34;:\u0026#34;+t(e.getUTCMinutes())+\u0026#34;:\u0026#34;+t(e.getUTCSeconds())+\u0026#34;.\u0026#34;+((e.getUTCMilliseconds()/1e3).toFixed(3)+\u0026#34;\u0026#34;).slice(2,5)+\u0026#34;Z\u0026#34;}(),iKey:e,name:\u0026#34;Microsoft.ApplicationInsights.\u0026#34;+e.replace(/-/g,\u0026#34;\u0026#34;)+\u0026#34;.\u0026#34;+t,sampleRate:100,tags:n,data:{baseData:{ver:2}}}}var h=d.url||y.src;if(h){function a(e){var t,n,a,i,r,o,s,c,p,l,u;g=!0,m.queue=[],f||(f=!0,t=h,s=function(){var 
e={},t=d.connectionString;if(t)for(var n=t.split(\u0026#34;;\u0026#34;),a=0;a\u0026lt;n.length;a++){var i=n[a].split(\u0026#34;=\u0026#34;);2===i.length\u0026amp;\u0026amp;(e[i[0][I]()]=i[1])}if(!e[D]){var r=e.endpointsuffix,o=r?e.location:null;e[D]=\u0026#34;https://\u0026#34;+(o?o+\u0026#34;.\u0026#34;:\u0026#34;\u0026#34;)+\u0026#34;dc.\u0026#34;+(r||\u0026#34;services.visualstudio.com\u0026#34;)}return e}(),c=s[k]||d[k]||\u0026#34;\u0026#34;,p=s[D],l=p?p+\u0026#34;/v2/track\u0026#34;:config.endpointUrl,(u=[]).push((n=\u0026#34;SDK LOAD Failure: Failed to load Application Insights SDK script (See stack for details)\u0026#34;,a=t,i=l,(o=(r=v(c,\u0026#34;Exception\u0026#34;)).data).baseType=\u0026#34;ExceptionData\u0026#34;,o.baseData.exceptions=[{typeName:\u0026#34;SDKLoadFailed\u0026#34;,message:n.replace(/\\./g,\u0026#34;-\u0026#34;),hasFullStack:!1,stack:n+\u0026#34;\\nSnippet failed to load [\u0026#34;+a+\u0026#34;] -- Telemetry is disabled\\nHelp Link: https://go.microsoft.com/fwlink/?linkid=2128109\\nHost: \u0026#34;+(S\u0026amp;\u0026amp;S.pathname||\u0026#34;_unknown_\u0026#34;)+\u0026#34;\\nEndpoint: \u0026#34;+i,parsedStack:[]}],r)),u.push(function(e,t,n,a){var i=v(c,\u0026#34;Message\u0026#34;),r=i.data;r.baseType=\u0026#34;MessageData\u0026#34;;var o=r.baseData;return o.message=\u0026#39;AI (Internal): 99 message:\u0026#34;\u0026#39;+(\u0026#34;SDK LOAD Failure: Failed to load Application Insights SDK script (See stack for details) (\u0026#34;+n+\u0026#34;)\u0026#34;).replace(/\\\u0026#34;/g,\u0026#34;\u0026#34;)+\u0026#39;\u0026#34;\u0026#39;,o.properties={endpoint:a},i}(0,0,t,l)),function(e,t){if(JSON){var n=T.fetch;if(n\u0026amp;\u0026amp;!y.useXhr)n(t,{method:w,body:JSON.stringify(e),mode:\u0026#34;cors\u0026#34;});else if(XMLHttpRequest){var a=new XMLHttpRequest;a.open(w,t),a.setRequestHeader(\u0026#34;Content-type\u0026#34;,\u0026#34;application/json\u0026#34;),a.send(JSON.stringify(e))}}}(u,l))}function 
i(e,t){f||setTimeout(function(){!t\u0026amp;\u0026amp;m.core||a()},500)}var e=function(){var n=l.createElement(u);n.src=h;var e=y[b];return!e\u0026amp;\u0026amp;\u0026#34;\u0026#34;!==e||\u0026#34;undefined\u0026#34;==n[b]||(n[b]=e),n.onload=i,n.onerror=a,n.onreadystatechange=function(e,t){\u0026#34;loaded\u0026#34;!==n.readyState\u0026amp;\u0026amp;\u0026#34;complete\u0026#34;!==n.readyState||i(0,t)},n}();y.ld\u0026lt;0?l.getElementsByTagName(\u0026#34;head\u0026#34;)[0].appendChild(e):setTimeout(function(){l.getElementsByTagName(u)[0].parentNode.appendChild(e)},y.ld||0)}try{m.cookie=l.cookie}catch(p){}function t(e){for(;e.length;)!function(t){m[t]=function(){var e=arguments;g||m.queue.push(function(){m[t].apply(m,e)})}}(e.pop())}var n=\u0026#34;track\u0026#34;,r=\u0026#34;TrackPage\u0026#34;,o=\u0026#34;TrackEvent\u0026#34;;t([n+\u0026#34;Event\u0026#34;,n+\u0026#34;PageView\u0026#34;,n+\u0026#34;Exception\u0026#34;,n+\u0026#34;Trace\u0026#34;,n+\u0026#34;DependencyData\u0026#34;,n+\u0026#34;Metric\u0026#34;,n+\u0026#34;PageViewPerformance\u0026#34;,\u0026#34;start\u0026#34;+r,\u0026#34;stop\u0026#34;+r,\u0026#34;start\u0026#34;+o,\u0026#34;stop\u0026#34;+o,\u0026#34;addTelemetryInitializer\u0026#34;,\u0026#34;setAuthenticatedUserContext\u0026#34;,\u0026#34;clearAuthenticatedUserContext\u0026#34;,\u0026#34;flush\u0026#34;]),m.SeverityLevel={Verbose:0,Information:1,Warning:2,Error:3,Critical:4};var s=(d.extensionConfig||{}).ApplicationInsightsAnalytics||{};if(!0!==d[C]\u0026amp;\u0026amp;!0!==s[C]){method=\u0026#34;onerror\u0026#34;,t([\u0026#34;_\u0026#34;+method]);var c=T[method];T[method]=function(e,t,n,a,i){var r=c\u0026amp;\u0026amp;c(e,t,n,a,i);return!0!==r\u0026amp;\u0026amp;m[\u0026#34;_\u0026#34;+method]({message:e,url:t,lineNumber:n,columnNumber:a,error:i}),r},d.autoExceptionInstrumented=!0}return m}(y.cfg);(T[t]=n).queue\u0026amp;\u0026amp;0===n.queue.length\u0026amp;\u0026amp;n.trackPageView({})}(window,document,{ src: 
\u0026#34;https://az416426.vo.msecnd.net/scripts/b/ai.2.min.js\u0026#34;, // The SDK URL Source //name: \u0026#34;appInsights\u0026#34;, // Global SDK Instance name defaults to \u0026#34;appInsights\u0026#34; when not supplied //ld: 0, // Defines the load delay (in ms) before attempting to load the sdk. -1 = block page load and add to head. (default) = 0ms load after timeout, //useXhr: 1, // Use XHR instead of fetch to report failures (if available), //crossOrigin: \u0026#34;anonymous\u0026#34;, // When supplied this will add the provided value as the cross origin attribute on the script tag cfg: { // Application Insights Configuration instrumentationKey: \u0026#34;{{- .Site.Params.azureAppInsightsKey -}}\u0026#34; /* ...Other Configuration Options... */ }}); \u0026lt;/script\u0026gt; {{ end }} Save the files and wait for the Static Web Apps pipeline to complete the update successfully.\nNavigate to your blog website, and open a few different articles, shortlinks to other parts in the web site or navigate back-and-forth to the home page. This to generate some statistics information.\nAfter only a few minutes, your App Insights data will get loaded, which can be retrieved from App Insights / Usage / section, using different views:\nFor example, select Users, which shows the number of unique visitors over the last 24 hours (note you can drill down to the last 30min, up to any custom period in time).\nClick on the View More Insights button below the chart, which will expose even more granular information regarding the visits. 
For example the location, time frame, client, browser version, etc\u0026hellip; all the way to the full sequence of blog articles visited.\nIn this article, I explained how to integrate Azure App Insights into a Hugo-based Azure Static Web App, using some JavaScript and HTML code.\nIf you are running Hugo on Azure SWA as well, let me know!\nCheers!!\n/Peter\n","date":"2022-10-30T00:00:00Z","permalink":"/post/integrating-azure-app-insights-for-hugo-on-static-web-apps/","title":"Integrating Azure App Insights for Hugo on Static Web Apps"},{"content":"As most of you know already, I enjoy writing technical (Azure-related) books. So if you are wondering why it\u0026rsquo;s been quiet lately, there is a good reason for it.\nActually, I was writing my 9th book, making it 9 books in 9 years straight, but something got in between the writing process and publishing. It\u0026rsquo;s called US Visa regulations :). Beginning this year, January 5th actually, I relocated from Belgium to Redmond, WA for Microsoft Corp. The work visa I\u0026rsquo;m on allows me to only work for Microsoft, which obviously makes sense. But it also meant I had to stop my side-activity as a book author with Apress.\nKnowing I was already halfway into the actual writing process, having spent about 6 months from ideation to realization, I didn\u0026rsquo;t just want to scrap the chapters I already had. My MTT colleague and good friend Unai Huete Beloki, fellow Azure DevOps trainer in my former EMEA team at Microsoft, was willing to step in and take over the writing process.\nHe did an amazing job, actually revamping some of the chapters I already had, moving the outline (= chapter order) around a little bit, and adding a huge amount of his own views and experiences into the material. Honestly, only about 20% of my original writing was left. So it really became HIS book, with just a little bit of me left in. 
Obviously, I still offered to do the technical reviewing (which gets paid with a free printed copy of the book).\nLet\u0026rsquo;s check out what the book is about\u0026hellip;\nIn short, based on the title, it covers a lot of the best practices on how to architect, build and run Azure workloads, while adding a lot of DevOps automation into doing this - which summarizes what Site Reliability Engineering is about. As an SRE, you spend about 50% of your time on developing, and 50% on engineering work, which could involve designing new workload architectures, monitoring and observing running workloads, automating deployments using template-based Infrastructure as Code, as well as DevOps CI/CD pipelines.\nBy the end of this book, you\u0026rsquo;ll have gained the confidence to design highly available and reliable Azure solutions, based on Microsoft Azure Reference Architectures and Azure DevOps and GitHub guidelines, with a better understanding of the role of Site Reliability Engineering and designing for reliability and resiliency.\nTable of Contents Chapter 1: The Foundation of Site Reliability Engineering Chapter 2: Service-Level Management Definitions and Acronyms Chapter 3: Azure Well-Architected Framework (WAF) Chapter 4: Architecting Resilient Solutions in Azure Chapter 5: Automation to Enable SRE with GitHub Actions/Azure DevOps/Azure Automation Chapter 6: Monitoring As the Key to Knowledge Chapter 7: Efficiently Handle Incident Response and Blameless Postmortems Chapter 8: Azure Chaos Studio (Preview) and Azure Load Testing (Preview) Good for 250+ pages of deep-technical content on Azure, DevOps and SRE practices!\nFeel free to reach out if you have any more questions on this book or its content. Unfortunately I don\u0026rsquo;t have access to discount codes or free copies, if that would be your first ask :). 
The book is available on Amazon as a printed copy, downloadable PDF/Epub and on Kindle, as well as on Apress/Springer\u0026rsquo;s own catalog.\nI hope you enjoy reading this work, learn from it, and become better in your role as a DevOps/SRE engineer.\nCheers!!\n/Peter\n","date":"2022-10-29T00:00:00Z","permalink":"/post/the-art-of-realizing-sre-on-azure/","title":"The Art of Realizing SRE on Azure - Book Review"},{"content":"\nHey awesome people,\nFor the ones who know me, it shouldn\u0026rsquo;t be a surprise I\u0026rsquo;m interested in DevOps, mainly using Azure DevOps and GitHub as core technologies, as well as several side-solutions that integrate with them. So when I heard about the DevOps Workflow Generator, a new free tool from the Microsoft Research Lab division, I wanted to give it a spin.\nThe Concepts of DevOps DevOps according to Microsoft\u0026rsquo;s definition: The Union of People, Processes and Products, to enable continuous delivery of value to the business\nThe tricky part with DevOps is that it\u0026rsquo;s not just about 1 team using a single tool, but potentially a complex group of people (the DevOps engineering team), using a multitude of tools and solutions to perform their role. Obviously there is the main DevOps pipeline engine (Azure Pipelines, GitHub Actions, Octopus Deploy, Jenkins, GitLab, etc\u0026hellip;), but most probably it also involves Infrastructure as Code tools such as Terraform, Azure Bicep or ARM Templates, Configuration as Code tools like PowerShell DSC, Ansible, Chef or Puppet, as well as DevSecOps tools, where Snyk, Aqua, SonarQube, WhiteSource Bolt and Veracode are just some of the popular ones. (Btw, if you missed my recent post on integrating DevSecOps by Shifting Left, which I wrote for Azure Spring Clean, you can find it here)\nWhen to use what If you went to the Azure Spring Clean post I referred to earlier, you now know that DevOps is relying on a lot of tools.
I\u0026rsquo;m not saying it\u0026rsquo;s the most important aspect, but without tools, there is no DevOps. Period.\nSo coming back to the DevOps Workflow Generator, it literally helps organizations, and their DevOps teams, to get a clearer view on what the DevOps process is about, as well as what different tools are being used in each and every phase. You might ask: what\u0026rsquo;s the point in that?\nWell, let me tell you. Since DevOps is more about the culture than the tools, the better view your team has on what tools are being used, the better the team will operate. Apart from bringing together what are traditionally 2 separate worlds - Developers and Operations teams - there might actually be a sub-project as part of the DevOps adoption, to try and unify the tooling that is being used. Let\u0026rsquo;s say Developers are OK with using Visual Studio, or maybe Visual Studio Code, while Operations folks are probably using Visual Studio Code as well, in favor of Visual Studio. This might lead to an agreement that from now on, Visual Studio Code is a standard tool across the DevOps team. Maybe they even start sharing extensions (ARM Templates, Docker, Kubernetes, Bicep, Azure,\u0026hellip; are just some of my favorites).\nHow to use the DevOps Workflow Generator You most probably don\u0026rsquo;t need my help from this blog post to find out how the Workflow Generator works; it\u0026rsquo;s really that easy to use. However, after a first quick look, I went back (in preparation to write this article) and actually discovered some new things. So hopefully there is still something useful for you to discover:\nBrowse to https://devopsworkflowgenerator.research.microsoft.com/ From the top menu, select Map Workflow This presents you with a rather generic/standard DevOps workflow process; which, at first, was what I used - until I discovered you can actually make customizations to it.
Following my own best practices - Shifting Left - I added some of the process steps as outlined in the article I referred to earlier. The updated workflow looks like this now:\nNext, move over to the Select Tools step in the top menu. This allows you to select DevOps solutions and tools (single or multiple) for each cycle in your DevOps Workflow. And the list is extensive\u0026hellip;! Once all tools have been mapped to each phase, it\u0026rsquo;s time to compile a report, by navigating to the Download Report menu option. The outcome presents a nice-looking PDF document, which looks like this:\nPretty cool, right?\nSummary In this article, I wanted to introduce you to DevOps Workflow Generator, a free tool by Microsoft Research Labs, allowing DevOps teams to get a better view on their DevOps process(es), as well as highlighting the different solutions and tools used for each phase of the DevOps process.\nHave a look at it, and let me know your thoughts!\nIf you liked this article, consider giving back a small token of appreciation: Peter\n","date":"2022-04-03T00:00:00Z","permalink":"/post/devops-workflow-generator/","title":"DevOps Workflow Generator"},{"content":"\nHey folks,\nWelcome to #AzureSpringClean, an initiative from Joe Carlyle and Thomas Thornton which celebrates its 3rd edition this year. I\u0026rsquo;m thrilled to be part of it for the 2nd time this year.
My first article had security in mind, explaining the difference between Azure Service Principals and Managed Identities.\nFor this second article, I\u0026rsquo;m staying in the security focus, helping you understand DevSecOps, and how you can optimize security in your application deployment lifecycle, by \u0026ldquo;shifting left\u0026rdquo;.\nI hope you learn from it, enjoy reading through and get inspired to check back the whole week here at Azure Spring Clean, as there are A TON of great topics that will be covered.\nThe Concepts of DevOps Microsoft\u0026rsquo;s definition: The Union of People, Processes and Products, to enable continuous delivery of value to our end users\nWhat DevOps teams are doing is providing an automated deployment process, starting from:\na) checking in their application source code into a source control system, such as Azure DevOps Repos, GitHub Repositories or similar; b) Once the code has been checked in, it loops through a functional testing process; c) This forms the starting point of Continuous Integration (CI), where the code gets compiled into an artifact or deployable package; d) From here, the package typically gets deployed into a running state, known as Continuous Deployment (CD); this could be to a dev/test, staging or production environment; e) Once the application workload is published, there\u0026rsquo;s a handover to the Operations team, who integrate monitoring, watch over incidents and fix problems.\nThis (somewhat simplified) process gets repeated over and over (CI/CD pipeline automation), and should lead to faster deployments, fewer bug fixes needed and reliable workloads.\nIf we look at this process from a linear perspective, it would look like this:\nSo now you know what DevOps stands for, let\u0026rsquo;s zoom in more on the DevSecOps extension\nShifting Left When you look at this application lifecycle overview, most of the security handling happens - and most vulnerabilities are getting detected and handled - during the running phase.
That\u0026rsquo;s where we have DDoS attacks, identity or credential theft, networking attacks, crypto and similar malware, etc\u0026hellip;\nWhile there\u0026rsquo;s nothing wrong in handling security all the way at the end, it should not happen ONLY all the way at the end; it should rather be moved all the way to the beginning, becoming part of each and every stage in our DevOps process.\nThat\u0026rsquo;s what the industry calls \u0026ldquo;shifting left\u0026rdquo;.\nSo what are some of the security best practices DevOps teams can (easily) integrate into their DevSecOps process, you ask? Actually, there are a lot of different options and possibilities. Good news is, you can apply different security features in the different DevOps stages:\nI\u0026rsquo;ll drill down a bit more on these obviously, but let\u0026rsquo;s look at some examples:\nDEV\nThreat Modeling - Microsoft provides a free Threat Modeling Tool, which helps you outline the potential threats and vulnerabilities for a generic application architecture;\nCredentials \u0026amp; Secrets Management - NEVER store your secrets and credentials hard-coded in your source control system. That\u0026rsquo;s just a NO GO!! Rather, look into secret variables, variable groups or, even better, a secret store such as Azure Key Vault or GitHub Secrets\nPeer Review\nVALIDATE\nCode Analysis - this is where you integrate source code scanning tools such as Snyk, Sonar, WhiteSource Bolt and many others. The easiest way is integrating them into your own DevOps pipelines. If you are using Azure DevOps, have a look at the Azure DevOps Marketplace, and search for security. At present, there are 91 different extensions available to integrate security into your DevOps Organization and Projects. PACKAGE\nSecured Containers - Containers are becoming more and more popular to speed up the development process, simplifying the dependency on a platform and overall allowing for standardization thanks to Docker images.
While containers are bringing a lot of good things into a developer\u0026rsquo;s scenario, they also might bring in vulnerabilities and security risks, as you don\u0026rsquo;t always know where the image comes from, what\u0026rsquo;s running inside the container or who built it. That\u0026rsquo;s where I can definitely recommend a vulnerability scanner for containers. Tools such as Aqua or Twistlock are popular references here. If you are using Azure as your container runtime environment, know that you can enable Microsoft Defender for Containers, which provides built-in vulnerability scanning for your container images stored in Azure Container Registry.\nQuality Gates\nCloud Configuration\nSecurity \u0026amp; Pen-testing\nRUN\nCloud Platform Security - Any cloud vendor provides robust security features as part of the platform. Azure has come with Azure Security Center for years, which recently got rebranded to Microsoft Defender for Cloud (https://docs.microsoft.com/en-us/azure/defender-for-cloud/defender-for-cloud-introduction). Core characteristics are your Secure Score, a measure of your security posture, together with an extensive list of recommendations on how to optimize your security. RBAC permissions model - This corresponds to the concept of \u0026ldquo;least privilege\u0026rdquo;, which means your DevOps teams should only get the administrative permissions they really need to do their job, but no more. Or even better, only give them administrative permissions when they need to perform admin tasks. Services such as Azure Identity Protection and Privileged Identity Management are no luxury in any organization. Keep in mind these require an Azure AD Premium P2 license - which, if you ask me, is more than worth the extra cost!
Credentials \u0026amp; Secret Management OPERATE\nSecurity Monitoring Threat Detection - This brings us back to the \u0026ldquo;original\u0026rdquo; approach, having the necessary security guardrails in place to protect our runtime environments. SIEM solutions such as Azure Sentinel help here, covering both detection and mitigation.\nSummary In this post, I gave you an overview of the typical DevOps process, and what challenges exist around security. Often coming in all the way at the end (during the operations cycle), security should be part of each and every step of the DevOps concept, preferably as early in the process as possible. This is what the industry calls shifting left. I tried to share some \u0026ldquo;easy to implement solutions\u0026rdquo; to optimize security, by sharing several tools and services available today in Azure, Azure DevOps and GitHub. If you should have any questions on this, or you want to see a demo on what the tools can do, I\u0026rsquo;m only a nudge away ;)\nOnce more, thank you very much for reading, and thanks to Joe Carlyle and Thomas Thornton for having accepted my submission for this 2022 #AzureSpringClean edition. Enjoy your Spring Clean week, stay safe and healthy!\nPeter\n","date":"2022-03-17T00:00:00Z","permalink":"/post/azure-spring-clean---devsecops-and-shifting-left-to-publish-secure-software/","title":"Azure Spring Clean - DevSecOps and Shifting Left"},{"content":"\nHey friends,\nWelcome to #AzureSpringClean, an initiative from Joe Carlyle and Thomas Thornton which celebrates its 3rd edition this year. I\u0026rsquo;m thrilled to be part of this again as well, helping you understand the confusion around, and the difference between, Azure Service Principals and Azure Managed Identities.
As I recently relocated from Belgium to Redmond, and didn\u0026rsquo;t have all my video/audio equipment up for a recording, I decided to share this information in a blog post.\nI hope you learn from it, enjoy reading through and get inspired to check back the whole week here at Azure Spring Clean, as there are A TON of great topics that will be covered.\nIn this article, I want to clarify one of the more confusing concepts in Azure, more specifically the Azure Identity objects known as Service Principals and Managed Identities.\nIn essence, those objects are not really different from the concept of a traditional (Azure) Identity object, which is available in Azure Active Directory already.\nAzure Active Directory Azure AD is the Microsoft Azure cloud trusted Identity Object store, in which you create different Identity Object types. The most common ones are Users and Groups, but you can also have Applications in there, also known as Enterprise Apps.\nAn example for each could be:\nUsers: this is where you create regular user accounts, allowing them to authenticate to the Azure Portal, to start using Office 365,\u0026hellip; Groups: you define a security group in Azure AD, reflecting a group of users such as \u0026ldquo;DevOps team\u0026rdquo; Enterprise Apps: using OpenID Connect and OAuth, you allow a cloud-based application to trust your Azure AD for user authentication; the trusting app is known as an enterprise app object in Azure AD.
With that out of the way, let\u0026rsquo;s focus on the main topic of the article, detailing what a Service Principal is about:\nService Principal Most relevant to Service Principals are the Enterprise Apps; according to the formal definition, a service principal is \u0026ldquo;\u0026hellip;An application whose tokens can be used to authenticate and grant access to specific Azure resources from a user-app, service or automation tool, when an organization is using Azure Active Directory\u0026hellip;\u0026rdquo;\nBy using a Service Principal, you create an Identity object, which gets linked to an application or a service. This corresponds to the on-premises concept we have in Active Directory called a \u0026ldquo;service account\u0026rdquo;, where you would create a SQL Server, Backup Software or any other application user, which would be used to \u0026ldquo;run\u0026rdquo; the application.\nAnother important aspect: since this Service Principal is nothing more than an identity object in Azure AD, you can also restrict the permissions of what this SP can do, by leveraging Azure RBAC roles. If you want your 3rd party application to only be able to communicate with a specific Azure subscription within your Tenant, or to only update a given Resource Group, that\u0026rsquo;s what RBAC will control.\nTypical use cases where you would rely on a Service Principal are for example when running Terraform IaC (Infrastructure as Code) deployments, or when using Azure DevOps, or technically any other 3rd party application requiring an authentication token to connect to Azure resources.\nAn Azure Service Principal can be created using \u0026ldquo;any\u0026rdquo; traditional way like the Azure Portal, Azure PowerShell, REST API or Azure CLI.
Let me show you the command syntax out of Azure CLI to achieve this:\naz ad sp create-for-rbac --name \u0026#34;azurespringclean\u0026#34;\nresulting in this outcome:\nCopy this information aside; in the example of an Azure DevOps Service Connection, this information would be used as follows:\nwhere you just need to copy the correct information in the corresponding parameter fields. Or - since I used Terraform as another example - you would need to provide these details as part of your terraform.tf deployment file, or as a terraform.tfvars variable file, where the syntax would be the following:\n# Terraform subscription service principal vars\nsubscription_id = \u0026#34;0a407898-c077-442d-xxxx-xxxxxxxxxxxx\u0026#34;\nclient_id = \u0026#34;3723bfcc-f0ba-4bba-xxxx-xxxxxxxxxxxx\u0026#34;\nclient_secret = \u0026#34;b9eab5cb-c1b0-46e6-xxxx-xxxxxxxxxxxx\u0026#34;\ntenant_id = \u0026#34;70681eb4-8dbc-4dc2-xxxx-xxxxxxxxxxxx\u0026#34;\nNOTE: Keep in mind you are SHARING CREDENTIALS HERE, so depending on the actual application consuming the Service Principal, you need to verify if it is capable of handling these in a secure way. Using the Azure DevOps Service Connection example, that\u0026rsquo;s totally fine, as ADO encrypts these settings. In the Terraform scenario however, these are stored clear-text in the Terraform deployment script file. WHICH IS A NO GO!! So where possible, try to store the Service Principal credentials in a safe way, like using Azure Key Vault, HashiCorp Vault,\u0026hellip; instead of a clear-text file\nApart from the 2 examples I shared, the concept would be the same for about any other 3rd party application you want to have communicating with Azure in this way. However, I noticed that the technical parameter field names sometimes differ a bit from what the Azure CLI command provides as output.\nService Principals are great from a security perspective, if you manage them correctly.
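For completeness, here is a rough sketch of how those four values would get wired into the Terraform azurerm provider block. This is an illustrative fragment, not the only way to do it; in practice you would rather feed the values in via the ARM_SUBSCRIPTION_ID / ARM_CLIENT_ID / ARM_CLIENT_SECRET / ARM_TENANT_ID environment variables than keep them in a .tfvars file:

```hcl
# Sketch: consuming the Service Principal credentials in the azurerm provider.
# Marking client_secret as sensitive keeps it out of Terraform's console output,
# but it still ends up in the state file - so protect the state as well.
variable "subscription_id" {}
variable "client_id" {}
variable "client_secret" {
  sensitive = true
}
variable "tenant_id" {}

provider "azurerm" {
  features {}

  subscription_id = var.subscription_id
  client_id       = var.client_id
  client_secret   = var.client_secret
  tenant_id       = var.tenant_id
}
```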
It should be clear by now (you read this just a paragraph ago\u0026hellip;), there are also some challenges in using Service Principals:\nFirst, an admin needs to create the Service Principal objects; Client ID and Secret are exposed / known to the creator of the Service Principal; Client ID and Secret are exposed / known to the consumer of the Service Principal; and the lifetime of the Service Principal is max. 2 years. Luckily, the story is not complete yet, as that\u0026rsquo;s where we bring in Managed Identities:\nManaged Identities Managed Identities are in essence 100% identical in functionality and use case to Service Principals. In fact, they are nothing different from Service Principals, although the way you create and manage them is slightly different. In a good way!\nThey are always linked to an Azure Resource, not to an application or 3rd party connector They are automatically created for you, including the credentials; the biggest security advantage is that nobody besides Azure AD itself knows the credentials Managed Identities exist in 2 different flavors:\nSystem assigned; in this scenario, the identity is linked to a single Azure Resource, e.g. a Virtual Machine, a Logic App, a Storage Account, Web App, Function\u0026hellip; so almost anything, and there is a 1:1 relationship between the Azure Resource and the corresponding Managed Identity. If you delete the Azure Resource, the MI also gets deleted - which is a security benefit.\nUser Assigned; In this scenario, an admin user creates a stand-alone Managed Identity object (but no secrets or credentials are exposed here like you saw when creating a Service Principal). Next, you can \u0026ldquo;link\u0026rdquo; the User Assigned MI to multiple Azure Resources. A typical example here is a web server farm, where all servers need to connect to the same Azure Storage Account. Instead of creating 50 System Assigned MI\u0026rsquo;s, one for each Virtual Machine, you create only 1 and link it to all 50 VMs.
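To make the user-assigned pattern a bit more concrete, here is a rough Terraform sketch (resource and group names are made up, and the VM resource is trimmed down to the identity-relevant part):

```hcl
# Sketch: one user-assigned Managed Identity shared across a whole web farm.
resource "azurerm_user_assigned_identity" "webfarm" {
  name                = "id-webfarm"
  resource_group_name = "rg-demo"
  location            = "westeurope"
}

# Every VM in the farm attaches the SAME identity, instead of
# each VM carrying its own system-assigned identity.
resource "azurerm_linux_virtual_machine" "web" {
  count = 50
  name  = "vm-web-${count.index}"
  # ... size, image, network interface and admin settings omitted for brevity ...

  identity {
    type         = "UserAssigned"
    identity_ids = [azurerm_user_assigned_identity.webfarm.id]
  }
}
```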
Interestingly enough, there are debates going on about which of these scenarios would be the most secure, having a single one or multiple ones. I would say it depends on the requirements of your environment.\nLet\u0026rsquo;s close this post with a practical demo scenario, in which we integrate a Virtual Machine Managed Identity to interact with Azure Key Vault:\n(Prerequisite is having an Azure Virtual Machine and Azure Key Vault Resource deployed in your subscription)\nFrom the Azure Portal, select your deployed Virtual Machine; navigate to settings, Identity and switch its status to On, and save the changes. Next, navigate to your Azure Key Vault resource, select Access Policies, followed by granting this System Assigned Managed Identity get and list permissions (or any other) for keys, secrets or certificates. Know that you can specify the permissions on the secret types, but not all the way down to individual secret objects (meaning, if you have multiple secrets or keys in KV, this Managed Identity would be able to use all of them)\nNotice how Azure Key Vault is expecting a Service Principal object here (where in reality we are using a Managed Identity).\nSimilarly, let\u0026rsquo;s remove the System Assigned MI of the VM and use a User Assigned one in the next example (an Azure Resource can only be linked to one or the other, not both\u0026hellip;):\nFrom the Azure Virtual Machine blade settings, switch back to Identity and turn Off the System Assigned configuration. This will prompt for your confirmation when saving the settings. At this time, the System Assigned Managed Identity is already gone from Azure AD. Wait for the deregistration of the object. Before we can use the User Assigned Managed Identity, we first need to create it. This can be done as follows:\nFrom the Azure Portal, select Create new Resource, type \u0026ldquo;User Assigned Managed Identity\u0026rdquo; in the search field and click Create.
Specify the Resource Group, Azure Region and Name for this resource. Confirm the creation and wait for it to be completed.\nOnce created, switch back to the Azure Virtual Machine, select Identity and this time, make sure you choose User Assigned\nRecognize the Managed Identity you just created.\nSelect it and add it as a Virtual Machine User Assigned object.\nIf you have another Azure Resource available in your subscription, for example another Virtual Machine, an Azure Web App, a Function,\u0026hellip; then, once more selecting Identity from that resource\u0026rsquo;s settings pane, you will see you can reuse the same Managed Identity that already got linked to the initial Virtual Machine. The screenshot below shows what it looks like for an Azure Web App Resource:\nTo finish the foreseen scenario, let\u0026rsquo;s go back to Azure Key Vault, and specify another Access Policy for this User Assigned Managed Identity:\nSelect your Azure Key Vault resource, followed by selecting Access Policy from the settings. Specify the Key and/or Secret Permissions (for example get, list). Click \u0026ldquo;Select Principal\u0026rdquo; and search for the User Assigned Managed Identity you created earlier. After saving the changes, the result is that now both the Azure Virtual Machine as well as the Web App - having the User Assigned Managed Identity assigned to them - can read our keys and secrets from Azure Key Vault.\nWhat you learned In this post, I wanted to clarify the use cases, differences and similarities between Service Principals and Managed Identities. Both are Azure Identity objects, allowing for a secure interaction between 3rd party applications and Azure, or within Azure Resources directly. Depending on the use case, you would use one or the other.
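The same Key Vault wiring can also be captured as code instead of portal clicks. A rough Terraform sketch, with placeholder names and the Key Vault ID left for you to fill in, could look like this:

```hcl
# Sketch: granting a user-assigned Managed Identity get/list on Key Vault secrets.
resource "azurerm_user_assigned_identity" "app" {
  name                = "id-app"
  resource_group_name = "rg-demo"
  location            = "westeurope"
}

resource "azurerm_key_vault_access_policy" "app_read" {
  # Replace with the resource ID of your existing Key Vault.
  key_vault_id = "<your-key-vault-resource-id>"

  # The identity resource exports tenant_id and principal_id,
  # which is exactly what an access policy entry needs.
  tenant_id = azurerm_user_assigned_identity.app.tenant_id
  object_id = azurerm_user_assigned_identity.app.principal_id

  secret_permissions = ["Get", "List"]
}
```

Note that this mirrors the access-policy model shown in the walkthrough; on a Key Vault using Azure RBAC authorization instead, you would assign a role such as Key Vault Secrets User rather than an access policy.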
If you want to get started with Azure, or want to read more in the official Microsoft docs on the subject, follow the links below:\nCreate your Azure Trial subscription from this link: Additional reading material on Service Principals Additional reading material on Managed Identities. Once more, thank you very much for reading, and thanks to Joe Carlyle and Thomas Thornton for having accepted my submission for this 2022 #AzureSpringClean edition. Enjoy your Spring Clean week, stay safe and healthy!\nPeter\n","date":"2022-03-14T00:00:00Z","permalink":"/post/azure-spring-clean---demystifying-service-principal-and-managed-identities/","title":"Azure Spring Clean - Service Principals - Managed Identities"},{"content":"Hello readers,\nAbout 2 months ago, I promised I would start writing down my adventures in the DotNet Blazor development world, which you can read about in my first and second post.\nWelcome to \u0026ldquo;Coding Apps in Blazor from a non-developer standpoint - Part 3\u0026rdquo;\nThis next article will cover about the same as the previous one, deploying the Blazor Server app template, but instead of using the Visual Studio GUI for this, I\u0026rsquo;ll use the dotnet commandline tool this time, as it comes with some cool enhancements and options. And, next to that, it\u0026rsquo;s always nice to rely on a command shell to speed up certain tasks.\nPrerequisites To make sure you are ready to go and follow along, let me list up some prereqs:\nThe only prereq for the actual creation part is the .NET RunTime:\n.NET RunTime In order to run C# and .NET applications, one needs to have the necessary .NET RunTime installed on the development workstation. In a later article, I\u0026rsquo;ll describe how you can publish Blazor apps to Azure App Services or Containerized workloads, where you will notice the .NET RunTime is required as well.
If you are running Visual Studio 2019, install the .NET 5.0 RunTime; if, like me, you are running Visual Studio 2022 Preview, you can directly go for .NET 6.0 (I\u0026rsquo;m running Windows 11, Visual Studio 2022 Preview 7.0, which means it could look a bit different on your machine, although most steps will be identical\u0026hellip;)\nHowever, what\u0026rsquo;s the point in creating an application placeholder folder, if you are not customizing and developing, right? Which means you still need a Developer Interface for the actual coding. Nice thing is that you are not limited to Visual Studio, but could also use Visual Studio Code, JetBrains Rider, or basically any other IDE you prefer.\nVisual Studio IDE Any flavor of Visual Studio 2019 or later should work (know that 2022 is getting launched Nov 8th\u0026hellip;), and depending on your situation, you might already have access to a licensed edition of Standard, Professional or Enterprise from your employer. If not, totally fine, as there is also a free Community Edition available from the link I shared.\nVisual Studio Code VS Code is a \u0026ldquo;lightweight\u0026rdquo;, yet super-powerful source code editor which runs on your desktop and is available for Windows, macOS and Linux. It comes with built-in support for JavaScript, TypeScript and Node.js and has a rich ecosystem of extensions for other languages (such as C++, C#, Java, Python, PHP, Go) and runtimes (such as .NET and Unity). Common extensions I\u0026rsquo;ve been using since day 1 are Azure App Services, ARM Template tools, Docker Containers and Kubernetes.
And because of the built-in support for C# and .NET overall, it\u0026rsquo;s a perfect target for developing Blazor applications.\nUsing .NET commandline to Create a Blazor Web Assembly App Assuming you have all the prereqs covered, you can create your first Blazor Web Assembly App by going through the following steps:\nOpen your preferred commandline Shell (Command Prompt, Windows Terminal or PowerShell) and validate the dotnet version by initiating the following command: dotnet --version In my case, I\u0026rsquo;m running the .NET 6.0 RC2 Preview, which should shift to a Release version later today :)\nNext, create a subfolder for your Blazor Application, by initiating the \u0026ldquo;MD\u0026rdquo; (Make Directory on a Windows Machine) command, and \u0026ldquo;CD\u0026rdquo; (Change Directory) to navigate to the subfolder: md dotnetblazordemo followed by\ncd dotnetblazordemo Next, pull up the Blazor templates by initiating the following command: dotnet new --list Blazor As you can see, there is both a template for Blazor Server and Blazor Web Assembly; as I showed you how to deploy a Blazor Server App in the Visual Studio GUI post, let\u0026rsquo;s deploy a Blazor Web Assembly alternative this time. Remember, Web Assembly is a browser capability, allowing you to run full .NET code directly in the browser, without requiring a server-backend. For more details, check back on my Blazor introductory article in which I positioned the different Blazor versions and their characteristics.
Initiate the following command:\ndotnet new blazorwasm --hosted This nicely creates all necessary components for our Blazor App, containing the Client (Front-End), Server (Back-End) and Shared components.\nTo actually run the Blazor Web Assembly app, move into the \u0026ldquo;Server\u0026rdquo; folder (cd Server\u0026hellip;) and kick off the \u0026ldquo;dotnet run\u0026rdquo; command: dotnet run This starts with compiling (Building\u0026hellip;) the app, and showing a successful run, exposing the different ports the app is listening on (https and http)\nOpen your favorite browser, and connect to the https://localhost: address; easiest (on Windows) is Ctrl+click and selecting the URL. You now have a fully functional Blazor App running in the browser. Congratulations. (for details on what the app is about, feel free to check my notes in my previous article)\nMore dotnet Blazor command line options While I could have stopped the article here and thanked you for following along, I want to emphasize some other capabilities of the dotnet commandline, covering some of the additional parameters to choose from. Note: I will only touch on the Blazor-specific options, not all the overall dotnet commandline options available.\nTo get an idea about all different options available, run the following command: dotnet new blazorwasm --help deploy a specific Framework version With different .NET Framework versions available on the same developer station, it might be necessary to specify a specific version of .NET to use; this is possible by adding the -f or \u0026ndash;framework parameter to the dotnet new blazorwasm syntax, followed by the version identifier net5.0, net6.0 or netcoreapp3.1\ninclude ASP.NET Core host We used this parameter in the previous steps, but I didn\u0026rsquo;t really explain what it did.
If you want to build a \u0026ldquo;client\u0026rdquo; Web Assembly version, which runs with an ASP.NET Server-backend, you need to specify the -ho or \u0026ndash;hosted parameter\nLet\u0026rsquo;s run a similar command as before to create a Blazor Web Assembly app, without specifying the \u0026ndash;hosted parameter, to see the difference:\ndotnet new blazorwasm Once created, check the file structure of this new application folder:\nAs you can see, there is no separation for the Client and Server code files, but we only have the Pages and Shared folder.\nIntegrate Azure AD authentication It might be required to integrate authentication into your Blazor Web Assembly app, and why not consider Azure Active Directory for this, right? While there is a bit more required than what the commandline parameters give you, it\u0026rsquo;s a great starting point, deploying a new Blazor app which is pre-authentication ready. To do this, specify the -au or \u0026ndash;auth parameter\ndotnet new blazorwasm -au individual The creation process is about the same as before; so let\u0026rsquo;s trigger another dotnet run action and connect to the app from the browser:\nNice! There is a prompt here, informing us to customize the Program.cs file, and provide the necessary Azure AD Authentication for our application identity\nLet\u0026rsquo;s have a look at the Program.cs file, which also contains a little snippet and pointer where to add the necessary Azure AD Authentication settings and where to find additional info in the docs.\nRunning Blazor App as Progressive Web App I won\u0026rsquo;t drill down on all the details on what a Progressive Web App is about, but in short, it allows you to turn your Web Assembly browser-based app into a \u0026ldquo;desktop\u0026rdquo;-mode application, or even use it \u0026ldquo;offline\u0026rdquo; (depending on app specifics).
This is done by defining the -p or \u0026ndash;pwa parameter.\nLet\u0026rsquo;s try it out:\n1 2 3 dotnet new blazorwasm -p dotnet run and test it again from the browser by connecting to https://localhost:\nFrom the browser settings, navigate to Apps and select Install this app\nand confirm the popup prompt Install once more.\nOnce installed, you can set some additional settings by clicking the Allow button.\nFrom here, your app will run in a separate docked window, just like any other Windows Application. You could also add a shortcut to the desktop, taskbar or Start Menu.\nSummary In this post, I introduced you to creating your first Blazor Web Assembly App, using the dotnet commandline syntax. Starting from the base Blazorwasm template creation, I also covered several interesting creation parameters that could come in handy when creating Blazor Web Assembly apps, directly from the commandline.\nIn a next Blazor-related post, I\u0026rsquo;ll walk you through some fundamental layout customization options, changing the look and feel of the navigation bar, the top bar and the actual web app pages themselves, by introducing HTML and CSS primarily.\nFor now, take care of yourself and your family, see you again soon with more Blazor-news.\nCheers, Peter\n","date":"2021-11-08T00:00:00Z","permalink":"/post/deploying-blazor-apps-using-dotnet-commandline/","title":"Deploying .NET6.0 Blazor App using dotnet commandline"},{"content":"Hello readers,\nAbout 2 months ago, I promised I would start writing down my adventures in the DotNet Blazor development world, which you can read in my first Blazor-related post here.\nWhile that post was more of a \u0026ldquo;setting the scene\u0026rdquo; for how I ended up learning Blazor (and C# mainly) and what the differences are between Blazor Server and Blazor WebAssembly, it also listed the TOP 8 objectives I want to get out of these articles.\nLet\u0026rsquo;s kick it off with the first one, Deploying your first Blazor Server
App\nPrerequisites To make sure you are ready to go and follow along, let me list some prereqs:\nVisual Studio IDE Any flavor of Visual Studio 2019 or later should work (know that 2022 is getting launched Nov 8th\u0026hellip;), and depending on your situation, you might already have access to a licensed edition of Standard, Professional or Enterprise from your employer. If not, totally fine, as there is also a free Community Edition available from the link I shared.\n.NET Runtime In order to run C# and .NET applications, one needs to have the necessary .NET Runtime installed on the development workstation. In a later article, I\u0026rsquo;ll describe how you can publish Blazor apps to Azure App Services or containerized workloads, where you will notice the .NET Runtime is required as well. If you are running Visual Studio 2019, install the .NET 5.0 Runtime; if, like me, you are running Visual Studio 2022 Preview, you can directly go for .NET 6.0.\n(I\u0026rsquo;m running Windows 11, Visual Studio 2022 Preview 7.0, which means it could look a bit different on your machine, although most steps will be identical\u0026hellip;)\nUsing Visual Studio IDE to Create a Blazor Server App Assuming you have all the prereqs covered, you can create your first Blazor Server App by going through the following steps:\nLaunch Visual Studio on your machine, and select Create a new Project In the search box, type blazor Notice there is a different template for a Blazor Server app or a Blazor Web Assembly app; for now, select Blazor Server App + Next\nIn the Configure Your New Project step, set a name for your new project, for example MyFirstBlazorApp, and update the location if needed (Notice how VStudio by default points to your user\u0026rsquo;s profile directory, creating a sources and repos subfolder structure)\nThis brings us to the Additional Information step, where you specify the Framework, which (preferably, but not required\u0026hellip;) is the latest .NET 6.0
(Long-Term Support). Confirm by clicking the Create button. After only a few minutes, the new Project got created and is available for \u0026ldquo;customizing\u0026rdquo;. Before launching the app and seeing it in action, let\u0026rsquo;s quickly describe the core application folder/file structure: (1). Solution - the way Visual Studio combines all code; a Solution can have a single project or multiple projects (2). Project - A project is a combination of dev source code, which gets compiled into a workable application (the runtime) (3). Data - SubFolder, which contains classes presenting data; in this example, it generates weather forecast information randomly (4). Pages - Blazor is using razor-pages, which are responsible for the actual layout of a web page. It typically has a @page identifier for the actual page, followed by an HTML-code section and an actual C#-code section (5). Shared - Blazor can share code (pages) between a Server and Web Assembly (Client) project. Those pages will preferably be saved in the Shared folder, to avoid duplicating source code between both (6). Appsettings.JSON - this file contains application settings to run, for example Logging information, Database Connection Strings, Authentication Keys,... (7).
Program.cs - The actual \u0026quot;core\u0026quot; of the application runtime; this is where you define which services should be used, amongst other coding information where relevant. Run the sample app (in Debug Mode) by pressing \u0026ldquo;F5\u0026rdquo; or by Right-clicking on the Project / Debug / Run Instance. This starts running the application on a dynamic browser port in your default browser, as well as automatically switching Visual Studio to the Diagnostics and Error blades. The application loads in the browser and shows the layout of the app; feel free to click around the different menu options in the left Navigation Bar and become familiar with the base app functionality\nNavigation Bar is the left menu, which allows you to easily navigate across your application pages The middle section is loading a specific razor-page, displaying HTML layout and data (the WeatherForecast information) Top Menu bar, currently only having an \u0026ldquo;About\u0026rdquo; hyperlink to the .NET website From the Navigation Bar, select Counter; this opens the \u0026ldquo;Counter\u0026rdquo; page, which has a button, responding to each Click, and changing the value of the Current count field. Switch back to Visual Studio, and open the Counter.razor file, displaying the actual code content. Notice the first section (@page) has a pointer to /counter; this is called a route. (If you switch back to the browser, you will see that once you select the Counter option in the Navigation Bar, the URL switches to /counter; if you navigate to Fetch Data, the route switches to /fetchdata, which loads the FetchData.razor page file. If you navigate to Home, it loads the Index.razor page from the Pages directory.) The @code section of the counter.razor page is where the actual C#-code lives. While it only has a few lines of code here, it actually works fine.\nThe code section\n1 private int currentCount = 0; specifies the currentCount field to be equal to zero.
This happens every time the application is loaded (yes, you can try that out\u0026hellip;).\nThe next code section,\n1 2 3 4 private void IncrementCount() { currentCount++; } gets triggered whenever the \u0026ldquo;Click Me\u0026rdquo; button is clicked, because of the @onclick-event specified for the button HTML-object. It uses the basic C# \u0026ldquo;++\u0026rdquo; operator, which adds a value of 1 to the current value of the object currentCount.\nSimply put, whenever you start the app, the counter value is 0, but gets increased by 1 every time you click the \u0026ldquo;Click Me\u0026rdquo; button.\nDebugging a Blazor App Since we are in \u0026ldquo;Visual Studio Debug mode\u0026rdquo; (I\u0026rsquo;ll write much more on that in a later article\u0026hellip;), let me briefly show you what it allows you to do. In short, it allows you to set a breakpoint, which pauses the application during run time. To set a breakpoint, move your mouse pointer to the front of a line of code (or a code section) (the grey bar), and click. This adds a red dot, which reflects the breakpoint. From here, switch back to the application in the browser, and click the \u0026ldquo;Click Me\u0026rdquo; button again on the Counter page. Notice how you get brought back into Visual Studio, where the breakpoint got updated with a yellow arrow, identifying where you are in the debugging (we only have 1 breakpoint for now, but very convenient if you have several of those set\u0026hellip;).
It will also show the actual value of currentCount in a little popup balloon message, as well as below in the Autos section.\nQuit Debugging mode by pressing Shift-F5, or by clicking the Stop button (red square button in the top menu) in Visual Studio, or by closing the browser that\u0026rsquo;s running the Blazor app.\nLast, clear the Breakpoint in Visual Studio by clicking on it again.\nSummary In this post, I introduced you to creating your first Blazor Server App, using the Visual Studio template for this application type. I described the core folder/file structure of your Blazor Project, as well as explained some of the base concepts of razor pages. You learned how to run your application, as well as the basics of debugging, by setting a breakpoint and validating the outcome.\nIn a next Blazor-related post, I\u0026rsquo;ll walk you through some fundamental layout customization options, changing the look and feel of the navigation bar, the top bar and the actual web app pages themselves, by introducing HTML and CSS primarily.\nBtw, if you are interested in developing with Blazor, you can hire a Blazor developer from Toptal, a leading platform for connecting top-tier developers with clients.\nFor now, take care of yourself and your family, see you again soon with more Blazor-news.\nCheers, Peter\n","date":"2021-11-07T00:00:00Z","permalink":"/post/coding-apps-in-blazor-from-a-non-developer---part-2/","title":"Coding Apps in Blazor from a non-developer standpoint - Part 2"},{"content":"Hey,\nApril last year, I wrote a post in which I looked back at my 1st 6 months as an Azure Technical Trainer.
It makes sense to update this post in the week I\u0026rsquo;m officially starting my 3rd year in the role :).\nApril 2020 was literally in the middle of the (first) pandemic lock-down, when we all thought it would clear out by May; little did we know it would last for another 18 months from there, facing another 2 or 3 lockdowns and still not seeing the end of it, although there is some tiny spark of light at the end of the tunnel.\nFrom a job role perspective, the biggest challenge for me (and several others on the team) was shifting away from 100% in-person deliveries to 100% virtual deliveries. Even now, 1.5 years later, I\u0026rsquo;m still not used to it. Is it going better? Sure! For several reasons:\nHonestly, Teams has improved dramatically! It\u0026rsquo;s more stable and brought in more features (GIFs in chat, hand gestures, background effects, live captions, participant views,\u0026hellip; ) - more to be found on what got introduced and what the Teams team has on the roadmap:\nAttendees have accepted the virtual life; this was a big shift, noticeable for me after only a few weeks of VILT deliveries. Participants have their camera on, are building up more interaction by asking more questions, and have also accepted the \u0026ldquo;work from home\u0026rdquo; noises. It\u0026rsquo;s actually joyful to hear a little baby crying, to hear a dog barking, to hear a doorbell ringing for another package delivery,\u0026hellip; which was not always (not at all??) accepted before the pandemic. I honestly hope this mindset keeps hanging on after we switch back to a mixed world.\nWhiteboard brings dynamics to the class; if you haven\u0026rsquo;t used the Microsoft Whiteboard App yet, give it a spin! You won\u0026rsquo;t be disappointed. I\u0026rsquo;m using it for about 70% of my deliveries (the other 30% is live demos; I don\u0026rsquo;t use a single slide anymore the whole week\u0026hellip;).
Whiteboarding is something I started doing a long time ago during in-person deliveries, and I kept using it in virtual deliveries. It helps build up the story, and it helps attendees learn in a different way by seeing it visually and not just hearing about it from the trainer\u0026hellip; It\u0026rsquo;s also more dynamic than a static image - which typically feels overwhelming and complex, as it shows the end-state of a solution, but not how to get there.\nThe biggest joy in the last few weeks, basically when the new fiscal started, was the Open Hacks moving into our team as well. If you haven\u0026rsquo;t heard about Open Hacks, it\u0026rsquo;s one of the best learning offerings Microsoft currently has, in my opinion. Instead of listening to a trainer, an attendee actually needs to figure out - in a team of 5-6 participants typically - how to complete challenges. Each Open Hack comes with a specific focus (e.g. Migrating workloads, Containers, Serverless, DevOps,\u0026hellip;) and has 8 or 9 challenges to complete. Starting from real-life scenarios, your team\u0026rsquo;s task is to figure out how to do it, discuss a strategy, read through Microsoft Docs and get on with it. Before moving on, a technical coach (that\u0026rsquo;s the Microsoft Technical Trainer team) reviews the success criteria and allows you to proceed to the next challenge. The key success factor here is the mix of backgrounds and experiences in each team, the learning method itself, which has a heavy \u0026ldquo;do it\u0026rdquo; mindset, and overall the team collaboration.\nI actually assisted in coaching Open Hacks in the early days of the program, more than 3 years ago already, before I joined Microsoft.
Consider seeing 400-500 people in a large conference hall \u0026ldquo;Hacking\u0026rdquo;, getting frustrated, eavesdropping on other teams to pick up what solution they have for a given challenge,\u0026hellip; wonderful; quite impressive to see how that in-person model has nicely shifted to a virtual experience. Although, nothing beats that in-person experience!\nLast, I had the amazing opportunity to join the DevOps Cloud Advocacy team as a v-member. This means you can contribute to the success of the team, without officially being in that job role. I personally enjoyed (and still do!!) this, because it not only forced me to learn much more about Azure DevOps, GitHub Actions and overall DevOps concepts (Scrum to name one), it also expanded my network of amazing technical folks within the larger Microsoft world. I presented multiple internal and public sessions on DevOps subjects, wrote several public blog posts and overall helped the team in what they are doing. I co-authored docs around Azure Static Web Apps, reviewed the Bicep Learning Path,\u0026hellip; and so much more!!\nSome links to my artifacts:\nLearn Git - Ep1 on LearnTV Introduction to Azure DevOps - The 425 Show DevBlogs article - ADO Audit Stream DevBlogs article - Grafana DevBlogs article - Quality Gates DevBlogs article - Service Principals \u0026amp; Managed Identities Bicep Learning Path Static Web Apps with ARM deployment As you can see, there are quite some dynamics in the Microsoft employee world, with a lot of interesting opportunities, challenges (outside of the Open Hack ones :p), and every few weeks there is something coming up that allows you to grab it and contribute to the success of your team, or other teams. I love those dynamics a lot, and get the respect and recognition from my manager, my colleagues within my own team and outside.
So I\u0026rsquo;m sure I\u0026rsquo;m going to keep on doing this for another while :).\nLooking forward to my next \u0026ldquo;life as an ATT\u0026rdquo; post in about a year from now; who knows what this role will bring to the table by then. I personally hope for traveling again, at first to meet several of my peers in-person for the first time, but also for being in front of a classroom again to really see what\u0026rsquo;s going on in an attendee\u0026rsquo;s mind during my deliveries, bringing those coffee-corner chat moments back, and the traditional Thursday-evening drinks with my learners.\nI miss those moments\u0026hellip; but I\u0026rsquo;m still extremely loving my role!\n/Peter\n","date":"2021-09-21T00:00:00Z","permalink":"/post/my-first-2-years-as-att/","title":"My 2 years as an Azure Technical Trainer at Microsoft"},{"content":"Hello readers,\nThe ones who know me already know I used traditional on-premises datacenter infrastructure for the first 15 years of my career, before I jumped onto the Azure public cloud. Yes, I was an infra guy. And sometimes I still think I am, although I\u0026rsquo;ve been shifting more and more to containers and devops over the last 3 years.\nWith 25 years of IT experience, there was always 1 skillset missing\u0026hellip; coding, or, better put, learning a development language.\nAfter talking to several DevOps folks within Microsoft and elsewhere, it became clear I had to learn some language if I wanted to take this DevOps thing seriously (trust me, it is not required, but definitely recommended, now that I look back at how I talk about DevOps with some development skills acquired).\nSo many languages to choose from Once I set my mind to it, the next question was: what language am I going to teach myself?\nPython seemed the easiest and is quite popular, but didn\u0026rsquo;t appeal to me for a reason still unknown to me. Java seemed the most professional, but also the most complex.
Go looked promising, but I had never really seen it in action. C# and DotNet were the natural go-to, as we are using a lot of DotNet examples during the different Azure workshops I\u0026rsquo;m delivering every week. DotNet as my logical example Within the DotNet (https://dotnet.microsoft.com/learn/dotnet/what-is-dotnet) family, you still have a few different options:\nDotNet Core, which gives you a cross-platform .NET implementation for browsers and apps on any platform OS DotNet Framework, supporting full Windows applications and websites Xamarin/Mono, which is a DotNet implementation for mobile apps How I ended up with Blazor If I was going to develop \u0026ldquo;something\u0026rdquo;, it would probably be a console app (easy to demo) or a web application (perfect for my Azure training deliveries, and I can run it in Azure and Containers -\u0026gt; bonus). From there, my mind was set to start developing Web Applications, and more specifically by using Blazor (https://dotnet.microsoft.com/apps/aspnet/web-apps/blazor).\nOver the Christmas holidays, I started building my library of learning material, which consisted of Microsoft Docs, Youtube videos and other community sessions.
(I\u0026rsquo;ll cover some of these in another article later.)\nI also started working on building an app from scratch, which would make my life as an Azure trainer easier, as well as that of my colleagues.\nI managed to build a \u0026ldquo;usable\u0026rdquo; web application over the course of a few months, spending about 10 hours a week. As I approached 3000 followers on Twitter recently, I decided to come up with a series of posts on Blazor, explaining what I learned and where I struggled (and still am), to help others who are, like myself, starting with no dev experience whatsoever.\nWhat is Blazor Blazor comes in 2 different flavors:\nBlazor Server Blazor Web Assembly Blazor Server is closest to a traditional ASP.NET application, running on a web server, which can be Windows or Linux, as well as a containerized platform. Updates in the web app layout, the actual events (clicking buttons, routing pages,\u0026hellip;) and JavaScript handling (yes, I\u0026rsquo;ll detail that in another article) are all transferred between the client (your browser) and the server (the backend) using SignalR. Think of this as a messaging handler between client and server.\nBlazor WebAssembly is the 2nd flavor, which doesn\u0026rsquo;t require a server back-end, but rather runs all DotNet code directly in the browser. This is not a DotNet invention, but rather a capability of WebAssembly (WASM in short), an open standard which aims to allow running powerful applications natively in a browser.
If any server-side events are needed, you can integrate it with Blazor Server or other API-based back-ends.\nBlazor as terminology comes from a combination of \u0026ldquo;Browser\u0026rdquo; and \u0026ldquo;Razor\u0026rdquo; (https://docs.microsoft.com/en-us/aspnet/core/razor-pages/?view=aspnetcore-5.0\u0026tabs=visual-studio), if you were wondering.\nThe way I see it (as a non-developer :) ), those Razor Pages are like a simplified programming language in themselves, combining HTML layout controls and actual C# coding together. By routing Razor Pages across different Razor files (.razor as extension in Blazor), you build up your application. They are also recognized by the \u0026ldquo;@page\u0026rdquo; directive at the beginning of each file.\nBelow is a sample Razor Page, coming from the default Blazor Server or Blazor Web Assembly template in Visual Studio (which I will describe in a later blog post, including how to deploy it and what it does).\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 @page \u0026#34;/counter\u0026#34; \u0026lt;h1\u0026gt;Counter\u0026lt;/h1\u0026gt; \u0026lt;p\u0026gt;Current count: @currentCount\u0026lt;/p\u0026gt; \u0026lt;button class=\u0026#34;btn btn-primary\u0026#34; @onclick=\u0026#34;IncrementCount\u0026#34;\u0026gt;Click me\u0026lt;/button\u0026gt; @code { private int currentCount = 0; private void IncrementCount() { currentCount++; } } As you can see, the @page directive points to the \u0026ldquo;name\u0026rdquo; of this web page, being the \u0026ldquo;counter page\u0026rdquo;. Think of this as browsing to https://yourwebsiteURL/counter\nNext, there is a bit of HTML code for the actual layout of the page, and last, it contains some C# code with the actual intelligence of the counter button.\nThe way this page looks in the browser is like this:\nWhat am I going to do from here?
As promised, my idea is to share as much as possible of what I learned from Blazor in the last few months, taking you through a process to start learning to build your own Blazor applications. The following will be covered over a series of articles in the coming weeks:\nDeploying your first Blazor Server App Customizing the basic layout Updating Navigation Menu items Creating API Controllers to read data Integrating Entity Framework to read data from SQL DB Building forms for CRUD (create, read, update, delete) operations Integrating with external API Services to read data Publishing Blazor Server to Azure App Services I hope you will learn from this and enjoy the journey as much as I did, and still do. While I am far from calling myself a developer, it feels rather rewarding to see how code can be turned into a useful application!\nBtw, if you are interested in developing with Blazor, you can hire a Blazor developer from Toptal, a leading platform for connecting top-tier developers with clients.\nTalk to you soon,\nCheers, Peter\n","date":"2021-09-06T00:00:00Z","permalink":"/post/coding-apps-in-blazor-from-a-non-developer/","title":"Coding Apps in Blazor from a non-developer standpoint"},{"content":"As most of you know, I enjoy writing technical (Azure-related) books, but over the last year, I didn\u0026rsquo;t focus that much on writing myself, instead supporting other authors in their writing journey as well as performing technical reviews of the books they are writing.\nThe one I want to highlight in this post has an interesting back-story. Mid 2019, I got approached by Packt to write an update to their best-seller title \u0026ldquo;Azure Strategy and Implementation Guide\u0026rdquo;, in sponsorship with Microsoft. That was the third edition, which got released Oct 2019. About a year later, I got asked to write another update.
As I was a Microsoft employee in the meantime, and the book got sponsored by Microsoft, it could be a bit tricky :), so I decided to pull out of the writing process, to avoid conflicts of interest.\nInstead, I referred the editor to a few other Azure experts and authors I knew, while still offering my technical skills in the reviewing process.\nSo here it is, the \u0026ldquo;Azure Strategy and Implementation Guide\u0026rdquo;, 4th edition.\nDon\u0026rsquo;t let the reference to \u0026ldquo;fourth edition\u0026rdquo; fool you; there has been a massive rewrite of several chapters, with fresh new content and more technical information, and new chapters were added as well.\nAs technical reviewer, I mainly took on the responsibility of making sure the content was technically accurate. This involved not only the textual paragraphs and descriptions, but also the reference to any hands-on step-by-step guidance as well. While this book is targeted at cloud architects or cloud solution engineers - who are exploring the cloud transformation for their organization or their customers - it is not just covering the high-level capabilities of several Azure services, but also takes the reader on a journey through different use cases, how different services relate to each other and more.\nAbout the book By reading through this cookbook, you will be able to:\nUnderstand core Azure infrastructure technologies and solutions Carry out detailed planning for migrating applications to the cloud with Azure Deploy and run Azure infrastructure services Define roles and responsibilities in DevOps Get a firm grip on security fundamentals Carry out cost optimization in Azure This book is designed to benefit Azure architects, cloud solution architects, Azure developers, Azure administrators, and anyone who wants to develop expertise in operating and administering the Azure cloud.
Basic familiarity with operating systems and databases will help you grasp the concepts covered in this book.\nMicrosoft Azure is a powerful cloud computing platform that offers a multitude of services and capabilities for organizations of any size pursuing a cloud strategy. This fourth edition discusses the latest updates on security fundamentals, hybrid cloud, cloud migration, Microsoft Azure Active Directory, and Windows Virtual Desktop. It encapsulates the entire spectrum of measures involved in Azure deployment, including understanding Azure fundamentals, choosing a suitable cloud architecture, building on design principles, becoming familiar with Azure DevOps, and learning best practices for optimization and management. The book begins by introducing you to the Azure cloud platform and demonstrating the substantial scope of digital transformation and innovation that can be achieved with Azure\u0026rsquo;s capabilities. It then provides practical insights on application modernization, Azure Infrastructure as a Service (IaaS) deployment, infrastructure management, key application architectures, best practices of Azure DevOps, and Azure automation. 
By the end of the book, you will have acquired the essential skills to drive Azure operations from the planning and cloud migration stage to cost management and troubleshooting.\nTable of Contents\nIntroduction - what is Azure; Public, Private, Hybrid Cloud\nAutomation and Governance - how Infrastructure as Code and DevOps help optimize your deployments\nModernizing with Hybrid Cloud and Multi-Cloud - what makes multi-cloud a (successful) strategy; Azure Stack on-premises\nCloud Migration Planning - Cloud Adoption Framework (CAF); Azure Well-Architected Framework (WAF)\nWindows Virtual Desktop (WVD) - cloud-running desktops with cloud power\nCloud Security - fundamentals to fight cybercrime\nAzure Security Operations \u0026amp; Monitoring - Azure Security Center \u0026amp; Azure Sentinel SIEM\nCost Optimization - cost models and forecasting\n200 pages of concise, to-the-point guidance and best-practices content!\nMy feedback I have to be honest, doing the technical review of this book was an interesting ride for me. Being an author myself, especially on the same book only a year ago, felt weird at first. Since it was an update, parts of the 3rd edition got re-used. So it\u0026rsquo;s funny to have comments on your own paragraphs and content, but at the same time it\u0026rsquo;s very rewarding to see how the baseline I set out helped in making this fourth edition even better.\nThe authors did a really good job of focusing on the updated content, highlighting the changes that happened in Azure in only a year\u0026rsquo;s time, and showing several of the upcoming changes around WVD and security as prime topics.\nFeel free to reach out if you have any more questions on this book or its content. Grab your free (Microsoft-sponsored) copy from the Azure web site:\nhttps://azure.microsoft.com/en-us/resources/azure-strategy-and-implementation-guide-fourth-edition/\nand happy reading!\nDon\u0026rsquo;t hesitate to reach out in case you have any questions on this book or Azure in general.
peter@pdtit.be or @pdtit on Twitter\nStay safe and healthy you all!\nIf you enjoyed reading this article or any other post here, feel free to share your appreciation /Peter\n","date":"2021-06-18T00:00:00Z","permalink":"/post/azure-strategy---reviewing-done/","title":"Another Tech Reviewing done: Azure Strategy and Implementation Guide - 4th edition"},{"content":"Ever since I joined Microsoft (Sept 2019) and started working in the Azure Technical Trainer team, I have run a demo Azure Kubernetes Service (AKS) cluster with a few sample containers, helping me walk training attendees through the architecture, the management concepts and what it takes to run containerized workloads using the advanced capabilities coming with Kubernetes on Azure.\nKnowing this AKS cluster got deployed about 20 months back, it also meant my setup was getting a little bit out-of-date. Interestingly enough, it ran for almost 500 days (I considered waiting to publish this article to celebrate its anniversary\u0026hellip;)\nVersion strategy AKS follows the overall Kubernetes supportability with regard to versioning. More details in the below links in the docs:\nhttps://docs.microsoft.com/en-us/azure/aks/support-policies\nhttps://github.com/Azure/AKS/releases\nhttps://docs.microsoft.com/en-us/azure/aks/supported-kubernetes-versions\nIn short, the Kubernetes community releases minor versions about every 3 months, and major releases every 9 months approximately. As of version 1.19, support got extended to 12 months.\nWhat this means is that you see a list of \u0026ldquo;versions\u0026rdquo; available in Azure, for both new and existing deployments of AKS environments.
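As a side note, that rolling support window can be made tangible in a few lines of Python. This is an illustration only, not official AKS tooling; the "latest GA minor plus two previous minors" (N-2) window and the helper names are my own assumptions for the example.

```python
# Illustrative sketch only (not an AKS tool): model a rolling "N-2" support
# window, where the newest GA minor version plus the two previous minor
# versions are considered in support.

def parse_minor(version):
    """Return (major, minor) from a 'major.minor.patch' version string."""
    major, minor = version.split(".")[:2]
    return int(major), int(minor)

def is_supported(cluster_version, latest_ga, window=2):
    """True if the cluster's minor version is within `window` minors of the latest GA minor."""
    c_major, c_minor = parse_minor(cluster_version)
    l_major, l_minor = parse_minor(latest_ga)
    return c_major == l_major and 0 <= l_minor - c_minor <= window

# With 1.20 as the newest version, 1.18 is the oldest minor still in the
# window, while a 1.7.x cluster fell out of support long ago.
print(is_supported("1.18.19", "1.20.7"))  # True
print(is_supported("1.7.7", "1.20.7"))    # False
```

This is only meant to make the versioning policy concrete; the authoritative list of supported versions always comes from the supported-kubernetes-versions page linked above.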
At the time of deploying my cluster, it seems the active version was 1.7.7. (I could pull this up from my AKS Resource Group / Deployment history.)\nI assume I picked the [default] version at the time of deploying, which would mean there were 3 minor versions before, and 3 minor versions ahead.\nVersion 1.18 End of Life Earlier this week, I got an internal note from our back-end security team, informing me about AKS version 1.18 getting deprecated by June 30th, 2021 (yes, in about 2 weeks from now), and that I needed to upgrade to at least 1.19.\nhttps://github.com/Azure/AKS/releases Upgrade Process One of the core strengths of Kubernetes (AKS and other flavors) is how it handles seamless upgrades of its worker nodes. In short, each worker node in the cluster gets upgraded, and introduced to the cluster only when all health checks have passed successfully. If something goes wrong, the upgrade won\u0026rsquo;t be flagged as successful - but your running containers won\u0026rsquo;t even notice any interruption either. After a successful upgrade of an existing (or introduction of a new) node to the cluster, your containerized workloads will just be started and running as expected.\nTo save me from near-future upgrade tasks, I decided I wanted to upgrade to the most current version available (1.20.7 in my case). This meant performing a \u0026ldquo;double\u0026rdquo; upgrade, from the 1.18 minor version to the 1.19 minor version, followed by another upgrade to the 1.20 minor version.\nI used the portal for these steps, as they are really easy to perform, but know that Azure CLI or template-based scenarios are also an option.\nFrom the Azure Portal, browse to your AKS cluster resource. In the Overview section, notice the \u0026ldquo;Kubernetes version\u0026rdquo; parameter.\nSelect the version number; this brings you to the upgrade blade.\nSelect the Upgrade Version and choose the version of choice. (In my case, the highest was 1.19.11).
I also selected to upgrade the control plane + all node pools\nConfirm and wait for the upgrade process to kick off and complete successfully. This took about 6 minutes in my case.\nOnce this version 1.19.11 upgrade was done, I repeated the same steps, this time selecting version 1.20.7. This process took another 7-8 minutes on my end.\nThat\u0026rsquo;s all!!\nSummary In this article, I wanted to share some insights on the Kubernetes - and more specifically AKS - upgrade policy and process. Thanks to the architecture and orchestration of Kubernetes, upgrading versions is a rather smooth and almost seamless process. While it worked fine for an almost 18-month-old cluster setup, I would definitely recommend keeping up with versions faster, instead of waiting as long as I did.\nGot any questions? Don\u0026rsquo;t hesitate to reach out! peter@pdtit.be or @pdtit on Twitter :)\n/Peter\n","date":"2021-06-18T00:00:00Z","permalink":"/post/upgrading_aks_in_20min/","title":"Upgrading an AKS cluster in 20 min"},{"content":"Hey,\nI got some GREAT NEWS to share, based on an email I received earlier this week:\nwhich informed me I am recognized as a HashiCorp Ambassador!!\nI can imagine that some of you might not be familiar with this recognition. Simply said, it is much like the Microsoft MVP title, rewarding individuals for their outstanding contributions in the technical communities.\nIn my personal case, my first adventures with Terraform started about 5 years ago. I was struggling with Azure ARM templates, and Terraform seemed like an easier language to achieve the same purpose, deploying Azure resources.\nFrom there, I started presenting at international User Group events on Terraform (did around 12 sessions in about 24 months\u0026hellip;), and integrating it in my Azure workshops.
(The biggest reference was a German customer I helped migrate to Azure, where the year after, one of their cloud leads presented at HashiConf.)\nWhen I moved to Microsoft in Sept 2019, I didn\u0026rsquo;t stop including Terraform in my demos on Infrastructure as Code across all Azure trainings I\u0026rsquo;m delivering.\nI created courseware on Terraform for Azure, contributed to GitHub projects on the same topic and overall continue using it myself.\nEarlier this year, I was a presenter at HashiTalks 2021, talking about the different ways to authenticate Terraform to Azure.\nBesides spreading the word on Terraform and other HashiCorp tools, the program gives me the opportunity to participate in roundtables with the product teams and go through beta-testing of updates or new product features, as one of the early testers. And it\u0026rsquo;s also nice to have a stronger representation in the broader HashiCorp Ambassador family.\nThe same as with the Microsoft MVP status in past years, it is a nice recognition for the community efforts. I don\u0026rsquo;t take this award lightly and am very proud and honored to have received it.\nI always did - and will continue doing - community activities for the community, not for the title.\nIf you got any questions on HashiCorp, Terraform or the Ambassador program, feel free to reach out.\n/Peter\n","date":"2021-04-17T00:00:00Z","permalink":"/post/hashicorp-ambassador/","title":"I'm recognized as a HashiCorp Ambassador"},{"content":"\nHey everyone,\nThanks for joining the Azure Spring Clean online event again, in which the Azure community steps up once more, sharing the best tips \u0026amp; tricks on how to keep your Azure environments clean.
Discussing optimizations, covering new services and features or overall giving you a view on how to manage your Azure subscriptions even better.\nYou can check out all other blog posts or videos, which can guide you with best practices, lessons learned, or help you with some of the more difficult Azure Management topics at Azure Spring Clean.\nYou can also keep an eye on Twitter for the hashtag #AzureSpringClean so you won\u0026rsquo;t miss any of these Azure \u0026ldquo;spring\u0026rdquo; cleaning tips.\nI had the joy of participating again this year and decided to share a bit about IaC - Infrastructure as Code, sharing my view on some of the interesting tools and practices that could help you in automating your Azure deployments.\nWhat is Infrastructure as Code By using Infrastructure as Code, you define the infrastructure that needs to be deployed. The infrastructure code becomes part of your project. Just like the application source code, you store the infrastructure code in a source repository (GitHub, Azure Repos,\u0026hellip;) and version it. Anyone on your team can run the code and deploy similar environments. Infrastructure as Code (IaC) is the management of infrastructure (networks, virtual machines, load balancers, and connection topology), but can also be used to deploy the baseline of your platform services (App Services, Functions, Database services,\u0026hellip;). It uses a descriptive model, relying on the same versioning concept used by DevOps teams for their source code.\nInfrastructure as Code helps in avoiding or minimizing the problem of environment drift during a release deployment. Without IaC, a cloud team must maintain the settings of individual deployment environments (Dev/Test, Staging, Production). Over time, each environment tends to become a snowflake, that is, a unique configuration that cannot be reproduced automatically. This also leads to inconsistency among environments, which again leads to issues during deployments.
With snowflakes, deployment and maintenance of the underlying cloud infrastructure is based on manual processes, or maybe a combination of stand-alone scripts coming from all over the place; these are hard to track and are a main source of errors.\nAnother characteristic of IaC is Idempotence. Idempotence is the principle that a deployment command always sets the target environment into the same configuration, regardless of the environment\u0026rsquo;s starting state. Idempotency is achieved by either automatically configuring an existing target or by discarding the existing target and redeploying it from scratch (Spring Clean anyone? :)).\nAccordingly, with IaC, cloud teams apply changes to the environment description and integrate versioning into the configuration model, which is typically in well-documented code formats such as JSON or YAML. If the environment should be reconfigured or changes should get applied, you edit the source (the IaC files); you are not directly touching the target.\nTeams who implement IaC can deliver stable environments rapidly and at scale. Teams avoid manual configuration of environments and enforce consistency by representing the desired state of their environments via code. Infrastructure deployments with IaC are repeatable and prevent runtime issues caused by configuration drift or missing dependencies. DevOps teams can work together with a unified set of practices and tools to deliver applications and their supporting infrastructure rapidly, reliably, and at scale.\nWhere to get started Now that you know what Infrastructure as Code is, as well as recognize some of its main benefits, the typical next question is where to get started. The good news is, you can start right away, since Azure provides a few mechanisms out-of-the-box to create, update or import templates, known as ARM Templates (Azure Resource Manager).
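Before looking at the individual tools, the idempotence principle described above can be made concrete with a tiny, tool-agnostic sketch (plain Python; the apply function and the state dictionaries are purely hypothetical, real IaC engines do this per resource against a cloud API):

```python
# Idempotence sketch: "apply" always converges the target environment to the
# desired configuration, regardless of the environment's starting state.

def apply(current: dict, desired: dict) -> dict:
    """Return the environment state after applying the desired configuration."""
    result = dict(current)
    result.update(desired)                 # reconfigure drifted or missing settings
    for key in set(result) - set(desired):
        del result[key]                    # discard settings not in the description
    return result

desired = {"sku": "Standard_LRS", "location": "westeurope"}

# Two very different starting states: a manually "snowflaked" one, and nothing at all...
drifted = {"sku": "Premium_LRS", "location": "westeurope", "manual_hack": True}
empty = {}

# ...both converge to the same end state, and re-applying changes nothing.
assert apply(drifted, desired) == apply(empty, desired) == desired
assert apply(apply(drifted, desired), desired) == apply(drifted, desired)
```

The second assertion is the idempotence property itself: running the same deployment twice leaves the environment exactly where one run left it.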
(Amazon AWS is offering something similar called CloudFormation btw\u0026hellip;)\nBesides the Microsoft ARM scenario, several third-party tools exist, allowing you to embrace all concepts of Infrastructure as Code. These tools typically support multiple cloud platforms, rather than targeting one single cloud vendor.\nIn the remaining part of this article, I\u0026rsquo;ll share some insights into ARM Templates, as well as discuss some other tools that I\u0026rsquo;ve used over the years (and still use), with some specifics.\nARM Templates Probably the first scenario of using IaC in Azure is Azure Resource Manager (ARM) templates. The template is a JavaScript Object Notation (JSON) file that defines the infrastructure and configuration for your project. The template uses declarative syntax, which lets you state what you intend to deploy without having to write the sequence of programming commands to create it. In the template, you specify the resources to deploy and the properties for those resources.\nARM Templates can be authored in any editor (JSON is just text), but I can definitely recommend VS Code to do that. And make sure you install the ARM Template tools extension.
Once you have your ARM template file(s), deploying them is possible from PowerShell, Azure CLI or directly from the Azure Portal.\nA sample ARM Template (which you can use right away\u0026hellip;) to deploy a Windows 2019 Virtual Machine with Visual Studio, looks like this:\n{ \u0026#34;$schema\u0026#34;: \u0026#34;https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#\u0026#34;, \u0026#34;contentVersion\u0026#34;: \u0026#34;1.0.0.0\u0026#34;, \u0026#34;parameters\u0026#34;: { \u0026#34;adminUsername\u0026#34;: { \u0026#34;type\u0026#34;: \u0026#34;string\u0026#34;, \u0026#34;minLength\u0026#34;: 1, \u0026#34;defaultValue\u0026#34;: \u0026#34;labadmin\u0026#34;, \u0026#34;metadata\u0026#34;: { \u0026#34;description\u0026#34;: \u0026#34;Username for the Virtual Machine.\u0026#34; } }, \u0026#34;adminPassword\u0026#34;: { \u0026#34;type\u0026#34;: \u0026#34;securestring\u0026#34;, \u0026#34;defaultValue\u0026#34;: \u0026#34;L@BadminPa55w.rd\u0026#34;, \u0026#34;metadata\u0026#34;: { \u0026#34;description\u0026#34;: \u0026#34;Password for the Virtual Machine.\u0026#34; } } }, \u0026#34;variables\u0026#34;: { \u0026#34;imagePublisher\u0026#34;: \u0026#34;MicrosoftVisualStudio\u0026#34;,
\u0026#34;imageOffer\u0026#34;: \u0026#34;VisualStudio2019latest\u0026#34;, \u0026#34;imageSku\u0026#34;: \u0026#34;vs-2019-comm-latest-ws2019\u0026#34;, \u0026#34;OSDiskName\u0026#34;: \u0026#34;jumpvmosdisk\u0026#34;, \u0026#34;nicName\u0026#34;: \u0026#34;jumpvmnic\u0026#34;, \u0026#34;addressPrefix\u0026#34;: \u0026#34;10.1.0.0/16\u0026#34;, \u0026#34;subnetName\u0026#34;: \u0026#34;Subnet\u0026#34;, \u0026#34;subnetPrefix\u0026#34;: \u0026#34;10.1.0.0/24\u0026#34;, \u0026#34;vhdStorageType\u0026#34;: \u0026#34;Premium_LRS\u0026#34;, \u0026#34;publicIPAddressName\u0026#34;: \u0026#34;jumpvmip\u0026#34;, \u0026#34;publicIPAddressType\u0026#34;: \u0026#34;static\u0026#34;, \u0026#34;vhdStorageContainerName\u0026#34;: \u0026#34;vhds\u0026#34;, \u0026#34;vmName\u0026#34;: \u0026#34;jumpvm\u0026#34;, \u0026#34;vmSize\u0026#34;: \u0026#34;Standard_DS8_V2\u0026#34;, \u0026#34;virtualNetworkName\u0026#34;: \u0026#34;jumpvmVNet\u0026#34;, \u0026#34;vnetId\u0026#34;: \u0026#34;[resourceId(\u0026#39;Microsoft.Network/virtualNetworks\u0026#39;, variables(\u0026#39;virtualNetworkName\u0026#39;))]\u0026#34;, \u0026#34;subnetRef\u0026#34;: \u0026#34;[concat(variables(\u0026#39;vnetId\u0026#39;), \u0026#39;/subnets/\u0026#39;, variables(\u0026#39;subnetName\u0026#39;))]\u0026#34;, \u0026#34;vhdStorageAccountName\u0026#34;: \u0026#34;[concat(\u0026#39;vhdstorage\u0026#39;, uniqueString(resourceGroup().id))]\u0026#34;, \u0026#34;scriptFolder\u0026#34;: \u0026#34;.\u0026#34;, \u0026#34;scriptFileName\u0026#34;: \u0026#34;config-winvm.ps1\u0026#34;, \u0026#34;fileToBeCopied\u0026#34;: \u0026#34;ExtensionLog.txt\u0026#34; }, \u0026#34;resources\u0026#34;: [ { \u0026#34;type\u0026#34;: \u0026#34;Microsoft.Storage/storageAccounts\u0026#34;, \u0026#34;name\u0026#34;: \u0026#34;[variables(\u0026#39;vhdStorageAccountName\u0026#39;)]\u0026#34;, \u0026#34;apiVersion\u0026#34;: \u0026#34;2016-01-01\u0026#34;, \u0026#34;location\u0026#34;: \u0026#34;[resourceGroup().location]\u0026#34;, 
\u0026#34;tags\u0026#34;: { \u0026#34;displayName\u0026#34;: \u0026#34;StorageAccount\u0026#34; }, \u0026#34;sku\u0026#34;: { \u0026#34;name\u0026#34;: \u0026#34;[variables(\u0026#39;vhdStorageType\u0026#39;)]\u0026#34; }, \u0026#34;kind\u0026#34;: \u0026#34;Storage\u0026#34; }, { \u0026#34;apiVersion\u0026#34;: \u0026#34;2016-03-30\u0026#34;, \u0026#34;type\u0026#34;: \u0026#34;Microsoft.Network/publicIPAddresses\u0026#34;, \u0026#34;name\u0026#34;: \u0026#34;[variables(\u0026#39;publicIPAddressName\u0026#39;)]\u0026#34;, \u0026#34;location\u0026#34;: \u0026#34;[resourceGroup().location]\u0026#34;, \u0026#34;tags\u0026#34;: { \u0026#34;displayName\u0026#34;: \u0026#34;PublicIPAddress\u0026#34; }, \u0026#34;properties\u0026#34;: { \u0026#34;publicIPAllocationMethod\u0026#34;: \u0026#34;[variables(\u0026#39;publicIPAddressType\u0026#39;)]\u0026#34; } }, { \u0026#34;apiVersion\u0026#34;: \u0026#34;2016-03-30\u0026#34;, \u0026#34;type\u0026#34;: \u0026#34;Microsoft.Network/virtualNetworks\u0026#34;, \u0026#34;name\u0026#34;: \u0026#34;[variables(\u0026#39;virtualNetworkName\u0026#39;)]\u0026#34;, \u0026#34;location\u0026#34;: \u0026#34;[resourceGroup().location]\u0026#34;, \u0026#34;tags\u0026#34;: { \u0026#34;displayName\u0026#34;: \u0026#34;VirtualNetwork\u0026#34; }, \u0026#34;properties\u0026#34;: { \u0026#34;addressSpace\u0026#34;: { \u0026#34;addressPrefixes\u0026#34;: [ \u0026#34;[variables(\u0026#39;addressPrefix\u0026#39;)]\u0026#34; ] }, \u0026#34;subnets\u0026#34;: [ { \u0026#34;name\u0026#34;: \u0026#34;[variables(\u0026#39;subnetName\u0026#39;)]\u0026#34;, \u0026#34;properties\u0026#34;: { \u0026#34;addressPrefix\u0026#34;: \u0026#34;[variables(\u0026#39;subnetPrefix\u0026#39;)]\u0026#34; } } ] } }, { \u0026#34;apiVersion\u0026#34;: \u0026#34;2016-03-30\u0026#34;, \u0026#34;type\u0026#34;: \u0026#34;Microsoft.Network/networkInterfaces\u0026#34;, \u0026#34;name\u0026#34;: \u0026#34;[variables(\u0026#39;nicName\u0026#39;)]\u0026#34;, 
\u0026#34;location\u0026#34;: \u0026#34;[resourceGroup().location]\u0026#34;, \u0026#34;tags\u0026#34;: { \u0026#34;displayName\u0026#34;: \u0026#34;NetworkInterface\u0026#34; }, \u0026#34;dependsOn\u0026#34;: [ \u0026#34;[resourceId(\u0026#39;Microsoft.Network/publicIPAddresses/\u0026#39;, variables(\u0026#39;publicIPAddressName\u0026#39;))]\u0026#34;, \u0026#34;[resourceId(\u0026#39;Microsoft.Network/virtualNetworks/\u0026#39;, variables(\u0026#39;virtualNetworkName\u0026#39;))]\u0026#34; ], \u0026#34;properties\u0026#34;: { \u0026#34;ipConfigurations\u0026#34;: [ { \u0026#34;name\u0026#34;: \u0026#34;ipconfig1\u0026#34;, \u0026#34;properties\u0026#34;: { \u0026#34;privateIPAllocationMethod\u0026#34;: \u0026#34;Dynamic\u0026#34;, \u0026#34;publicIPAddress\u0026#34;: { \u0026#34;id\u0026#34;: \u0026#34;[resourceId(\u0026#39;Microsoft.Network/publicIPAddresses\u0026#39;, variables(\u0026#39;publicIPAddressName\u0026#39;))]\u0026#34; }, \u0026#34;subnet\u0026#34;: { \u0026#34;id\u0026#34;: \u0026#34;[variables(\u0026#39;subnetRef\u0026#39;)]\u0026#34; } } } ] } }, { \u0026#34;apiVersion\u0026#34;: \u0026#34;2015-06-15\u0026#34;, \u0026#34;type\u0026#34;: \u0026#34;Microsoft.Compute/virtualMachines\u0026#34;, \u0026#34;name\u0026#34;: \u0026#34;[variables(\u0026#39;vmName\u0026#39;)]\u0026#34;, \u0026#34;location\u0026#34;: \u0026#34;[resourceGroup().location]\u0026#34;, \u0026#34;tags\u0026#34;: { \u0026#34;displayName\u0026#34;: \u0026#34;JumpVM\u0026#34; }, \u0026#34;dependsOn\u0026#34;: [ \u0026#34;[resourceId(\u0026#39;Microsoft.Storage/storageAccounts/\u0026#39;, variables(\u0026#39;vhdStorageAccountName\u0026#39;))]\u0026#34;, \u0026#34;[resourceId(\u0026#39;Microsoft.Network/networkInterfaces/\u0026#39;, variables(\u0026#39;nicName\u0026#39;))]\u0026#34; ], \u0026#34;properties\u0026#34;: { \u0026#34;hardwareProfile\u0026#34;: { \u0026#34;vmSize\u0026#34;: \u0026#34;[variables(\u0026#39;vmSize\u0026#39;)]\u0026#34; }, \u0026#34;osProfile\u0026#34;: { 
\u0026#34;computerName\u0026#34;: \u0026#34;[variables(\u0026#39;vmName\u0026#39;)]\u0026#34;, \u0026#34;adminUsername\u0026#34;: \u0026#34;[parameters(\u0026#39;adminUsername\u0026#39;)]\u0026#34;, \u0026#34;adminPassword\u0026#34;: \u0026#34;[parameters(\u0026#39;adminPassword\u0026#39;)]\u0026#34; }, \u0026#34;storageProfile\u0026#34;: { \u0026#34;imageReference\u0026#34;: { \u0026#34;publisher\u0026#34;: \u0026#34;[variables(\u0026#39;imagePublisher\u0026#39;)]\u0026#34;, \u0026#34;offer\u0026#34;: \u0026#34;[variables(\u0026#39;imageOffer\u0026#39;)]\u0026#34;, \u0026#34;sku\u0026#34;: \u0026#34;[variables(\u0026#39;imageSku\u0026#39;)]\u0026#34;, \u0026#34;version\u0026#34;: \u0026#34;latest\u0026#34; }, \u0026#34;osDisk\u0026#34;: { \u0026#34;name\u0026#34;: \u0026#34;osdisk\u0026#34;, \u0026#34;vhd\u0026#34;: { \u0026#34;uri\u0026#34;: \u0026#34;[concat(reference(resourceId(\u0026#39;Microsoft.Storage/storageAccounts\u0026#39;, variables(\u0026#39;vhdStorageAccountName\u0026#39;)), \u0026#39;2016-01-01\u0026#39;).primaryEndpoints.blob, variables(\u0026#39;vhdStorageContainerName\u0026#39;), \u0026#39;/\u0026#39;, variables(\u0026#39;OSDiskName\u0026#39;), \u0026#39;.vhd\u0026#39;)]\u0026#34; }, \u0026#34;caching\u0026#34;: \u0026#34;ReadWrite\u0026#34;, \u0026#34;createOption\u0026#34;: \u0026#34;FromImage\u0026#34; } }, \u0026#34;networkProfile\u0026#34;: { \u0026#34;networkInterfaces\u0026#34;: [ { \u0026#34;id\u0026#34;: \u0026#34;[resourceId(\u0026#39;Microsoft.Network/networkInterfaces\u0026#39;, variables(\u0026#39;nicName\u0026#39;))]\u0026#34; } ] }, \u0026#34;diagnosticsProfile\u0026#34;: { \u0026#34;bootDiagnostics\u0026#34;: { \u0026#34;enabled\u0026#34;: false } } }, \u0026#34;resources\u0026#34;: [ { \u0026#34;apiVersion\u0026#34;: \u0026#34;2018-06-01\u0026#34;, \u0026#34;type\u0026#34;: \u0026#34;Microsoft.Compute/virtualMachines/extensions\u0026#34;, \u0026#34;name\u0026#34;: 
\u0026#34;[concat(variables(\u0026#39;vmName\u0026#39;),\u0026#39;/\u0026#39;, \u0026#39;VMConfig\u0026#39;)]\u0026#34;, \u0026#34;location\u0026#34;: \u0026#34;[resourceGroup().location]\u0026#34;, \u0026#34;dependsOn\u0026#34;: [ \u0026#34;[concat(\u0026#39;Microsoft.Compute/virtualMachines/\u0026#39;,variables(\u0026#39;vmName\u0026#39;))]\u0026#34; ], \u0026#34;properties\u0026#34;: { \u0026#34;publisher\u0026#34;: \u0026#34;Microsoft.Compute\u0026#34;, \u0026#34;type\u0026#34;: \u0026#34;CustomScriptExtension\u0026#34;, \u0026#34;typeHandlerVersion\u0026#34;: \u0026#34;1.7\u0026#34;, \u0026#34;autoUpgradeMinorVersion\u0026#34;:true, \u0026#34;settings\u0026#34;: { \u0026#34;fileUris\u0026#34;: [ \u0026#34;https://raw.githubusercontent.com/pdtit/ARMtemplates/master/JumpVM/configurevm.ps1\u0026#34; ], \u0026#34;commandToExecute\u0026#34;: \u0026#34;powershell.exe -ExecutionPolicy Unrestricted -File configurevm.ps1\u0026#34; } } } ] } ], \u0026#34;outputs\u0026#34;: { \u0026#34;JumpVM Public IP address\u0026#34;: { \u0026#34;type\u0026#34;: \u0026#34;string\u0026#34;, \u0026#34;value\u0026#34;: \u0026#34;[reference(resourceId(\u0026#39;Microsoft.Network/publicIPAddresses\u0026#39;,variables(\u0026#39;publicIPAddressName\u0026#39;))).IpAddress]\u0026#34; } } } You can read more information on ARM templates at the following links:\nARM template documentation\nTo get a head start on using and authoring your own templates, you can use an amazing GitHub repository called Azure Quickstart Templates, providing more than 1000 sample templates to deploy about anything on Azure.\nIf you are fairly new to the domain of ARM templates, I can recommend these sources to practice: Tutorial: Create and deploy your first ARM template as well as Microsoft Learn: Build Azure Resource Manager templates\nI also rely on ARM Templates myself a lot.
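For completeness, deploying a template like the one above from the Azure CLI could look as follows (a sketch only; the resource group and file names are placeholders):

```shell
# Create a target resource group, then deploy the ARM template into it
az group create --name demo-arm-rg --location westeurope

az deployment group create \
  --resource-group demo-arm-rg \
  --template-file ./jumpvm-template.json \
  --parameters adminUsername=labadmin
```

The PowerShell equivalent is New-AzResourceGroupDeployment with the same resource group and template file parameters.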
Feel free to grab a few of my sample templates from my GitHub repo\nTerraform Another really, really, really popular method of deploying your infrastructure to Azure is by using Terraform by HashiCorp. HashiCorp Terraform is an open-source tool for provisioning and managing cloud infrastructure, not just Azure. Using its providers, it is possible to target more than 35 cloud backends (Azure, AWS, GCP, Kubernetes,\u0026hellip;)\nFollowing all concepts of IaC, with Terraform you codify your infrastructure in configuration files in which you describe the topology of cloud resources. These resources include both Infrastructure as a Service (Virtual Machines, storage, network,\u0026hellip;) and Platform as a Service (App Services, Kubernetes, Monitoring,\u0026hellip;).\nSome benefits of Terraform, compared to ARM Templates:\n(WAY) easier syntax (using HCL - HashiCorp Configuration Language) Multi-platform aware (keep in mind this still requires creating platform-specific templates) Terraform CLI to interact and deploy templates Pre-flight capability: allowing you to validate and test your deployment, before running the actual deployment Terraform TFState - State file, which keeps track of an already executed deployment state and becomes the starting point for future deployment updates A sample Terraform template (you can use right away\u0026hellip;) to deploy an Ubuntu Virtual Machine on Azure, looks like this:\n#
Configure the Microsoft Azure Provider provider \u0026#34;azurerm\u0026#34; { # The \u0026#34;feature\u0026#34; block is required for AzureRM provider 2.x. # If you\u0026#39;re using version 1.x, the \u0026#34;features\u0026#34; block is not allowed. version = \u0026#34;~\u0026gt;2.0\u0026#34; features {} } # Create a resource group if it doesn\u0026#39;t exist resource \u0026#34;azurerm_resource_group\u0026#34; \u0026#34;my1stTFRG\u0026#34; { name = \u0026#34;my1stTFRG\u0026#34; location = \u0026#34;eastus\u0026#34; tags = { environment = \u0026#34;TF Demo\u0026#34; } } # Create virtual network resource \u0026#34;azurerm_virtual_network\u0026#34; \u0026#34;my1stTFVNET\u0026#34; { name = \u0026#34;my1stTFVnet\u0026#34; address_space = [\u0026#34;10.0.0.0/16\u0026#34;] location = \u0026#34;eastus\u0026#34; resource_group_name = azurerm_resource_group.my1stTFRG.name tags = { environment = \u0026#34;Terraform Demo\u0026#34; } } # Create subnet resource \u0026#34;azurerm_subnet\u0026#34; \u0026#34;myterraformsubnet\u0026#34; { name = \u0026#34;mySubnet\u0026#34; resource_group_name = azurerm_resource_group.my1stTFRG.name virtual_network_name = azurerm_virtual_network.my1stTFVNET.name address_prefixes = [\u0026#34;10.0.1.0/24\u0026#34;] } # Create public IPs resource \u0026#34;azurerm_public_ip\u0026#34; \u0026#34;myterraformpublicip\u0026#34; { name = \u0026#34;myPublicIP\u0026#34; location = \u0026#34;eastus\u0026#34; resource_group_name = azurerm_resource_group.my1stTFRG.name allocation_method = \u0026#34;Dynamic\u0026#34; tags = { environment = \u0026#34;Terraform Demo\u0026#34; } } # Create Network Security Group and rule resource \u0026#34;azurerm_network_security_group\u0026#34; \u0026#34;myterraformnsg\u0026#34; { name = \u0026#34;myNetworkSecurityGroup\u0026#34; location = \u0026#34;eastus\u0026#34; resource_group_name = azurerm_resource_group.my1stTFRG.name security_rule { name = \u0026#34;SSH\u0026#34; priority = 1001 direction = \u0026#34;Inbound\u0026#34; 
access = \u0026#34;Allow\u0026#34; protocol = \u0026#34;Tcp\u0026#34; source_port_range = \u0026#34;*\u0026#34; destination_port_range = \u0026#34;22\u0026#34; source_address_prefix = \u0026#34;*\u0026#34; destination_address_prefix = \u0026#34;*\u0026#34; } tags = { environment = \u0026#34;Terraform Demo\u0026#34; } } # Create network interface resource \u0026#34;azurerm_network_interface\u0026#34; \u0026#34;myterraformnic\u0026#34; { name = \u0026#34;myNIC\u0026#34; location = \u0026#34;eastus\u0026#34; resource_group_name = azurerm_resource_group.my1stTFRG.name ip_configuration { name = \u0026#34;myNicConfiguration\u0026#34; subnet_id = azurerm_subnet.myterraformsubnet.id private_ip_address_allocation = \u0026#34;Dynamic\u0026#34; public_ip_address_id = azurerm_public_ip.myterraformpublicip.id } tags = { environment = \u0026#34;Terraform Demo\u0026#34; } } # Connect the security group to the network interface resource \u0026#34;azurerm_network_interface_security_group_association\u0026#34; \u0026#34;example\u0026#34; { network_interface_id = azurerm_network_interface.myterraformnic.id network_security_group_id = azurerm_network_security_group.myterraformnsg.id } # Generate random text for a unique storage account name resource \u0026#34;random_id\u0026#34; \u0026#34;randomId\u0026#34; { keepers = { # Generate a new ID only when a new resource group is defined resource_group = azurerm_resource_group.my1stTFRG.name } byte_length = 8 } # Create storage account for boot diagnostics resource \u0026#34;azurerm_storage_account\u0026#34; \u0026#34;mystorageaccount\u0026#34; { name = \u0026#34;diag${random_id.randomId.hex}\u0026#34; resource_group_name = azurerm_resource_group.my1stTFRG.name location = \u0026#34;eastus\u0026#34; account_tier = \u0026#34;Standard\u0026#34; account_replication_type = \u0026#34;LRS\u0026#34; tags = { environment = \u0026#34;Terraform Demo\u0026#34; } } # Create (and display) an SSH key resource \u0026#34;tls_private_key\u0026#34; 
\u0026#34;example_ssh\u0026#34; { algorithm = \u0026#34;RSA\u0026#34; rsa_bits = 4096 } output \u0026#34;tls_private_key\u0026#34; { value = tls_private_key.example_ssh.private_key_pem } # Create virtual machine resource \u0026#34;azurerm_linux_virtual_machine\u0026#34; \u0026#34;myterraformvm\u0026#34; { name = \u0026#34;myVM\u0026#34; location = \u0026#34;eastus\u0026#34; resource_group_name = azurerm_resource_group.my1stTFRG.name network_interface_ids = [azurerm_network_interface.myterraformnic.id] size = \u0026#34;Standard_DS4_v2\u0026#34; os_disk { name = \u0026#34;myOsDisk\u0026#34; caching = \u0026#34;ReadWrite\u0026#34; storage_account_type = \u0026#34;Premium_LRS\u0026#34; } source_image_reference { publisher = \u0026#34;Canonical\u0026#34; offer = \u0026#34;UbuntuServer\u0026#34; sku = \u0026#34;16.04.0-LTS\u0026#34; version = \u0026#34;latest\u0026#34; } computer_name = \u0026#34;myvm\u0026#34; admin_username = \u0026#34;azureuser\u0026#34; disable_password_authentication = true admin_ssh_key { username = \u0026#34;azureuser\u0026#34; public_key = tls_private_key.example_ssh.public_key_openssh } boot_diagnostics { storage_account_uri = azurerm_storage_account.mystorageaccount.primary_blob_endpoint } tags = { environment = \u0026#34;Terraform Demo\u0026#34; } } As should be clear, the syntax is rather straightforward, intuitive and clean (Azure Spring Clean anyone\u0026hellip;?)\nYou can find more details on Terraform for Azure here or you could grab a few of my sample template files from my GitHub repo.\nPulumi Infrastructure as Code as we know it typically uses language-independent data formats, such as JSON or YAML, to define our infrastructure. Terraform is slightly different, and uses a Domain Specific Language (DSL), HashiCorp Configuration Language (HCL), to construct our templates.\nThis is where Pulumi is different again. With Pulumi, we don\u0026rsquo;t need to learn a DSL or use JSON or YAML.
If we\u0026rsquo;re already familiar with a programming language, think of DotNet, Java, Python,\u0026hellip; Pulumi allows you to define your cloud infrastructure using that exact same development language. Which also means you can leverage the standard functions within those programming languages too, things like loops, variables, error handling etc.\nThese capabilities are available in the other tools we\u0026rsquo;ve mentioned too. For example, creating multiple resources could be achieved by using a for loop in Python if using Pulumi, or by using the copy functionality if using Azure Resource Manager (ARM).\nTo get started with Pulumi, you don\u0026rsquo;t need much tooling:\nAzure CLI Pulumi CLI Your development language framework installed (Python, DotNet,\u0026hellip;) Compared to Terraform and ARM Templates, Pulumi looks at each IaC concept as a \u0026ldquo;Project\u0026rdquo;, where within a project, you define \u0026ldquo;Stacks\u0026rdquo;. Projects are where we will store all of the code for a particular workload. You can think of a project like a source code repository; if something was going to have its own repo, then it should probably be its own project.\nYou can think of stacks as different instances of the code within our project, normally with differing configuration. In its simplest form you\u0026rsquo;d have a single project and a stack per environment (dev, test, prod) for example.
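A typical bootstrap of that project/stack layout with the Pulumi CLI could look like this (a sketch; the project name, stack name and location are placeholders):

```shell
# Scaffold a new Pulumi project from the Azure + Python template
pulumi new azure-python --name demo-iac --yes

# One stack per environment, each with its own configuration
pulumi stack init dev
pulumi config set azure:location westeurope

# Preview and then apply the deployment for the selected stack
pulumi preview
pulumi up
```

Repeating the stack init and config steps for test and prod gives you the one-project, stack-per-environment layout described above.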
There are a number of different patterns that you can adopt.\nA sample Pulumi script could look like this:\n\u0026#34;\u0026#34;\u0026#34;An Azure Python Pulumi program\u0026#34;\u0026#34;\u0026#34; import pulumi from pulumi_azure import core, storage # Create an Azure Resource Group resource_group = core.ResourceGroup(\u0026#39;resource_group\u0026#39;) # Create an Azure resource (Storage Account) account = storage.Account(\u0026#39;storage\u0026#39;, # The location for the storage account will be derived automatically from the resource group. resource_group_name=resource_group.name, account_tier=\u0026#39;Standard\u0026#39;, account_replication_type=\u0026#39;LRS\u0026#39;) # Export the connection string for the storage account pulumi.export(\u0026#39;connection_string\u0026#39;, account.primary_connection_string) Head over to the following link to dive into some sample project practices to get Pulumi up and running for Azure: Pulumi Azure Get-Started as well as the Pulumi Azure Tutorials\nAzure Bicep The last tool I want to highlight here brings us back to where we started: another Microsoft-owned scenario, similar to ARM Templates, but at the same time also different.\nBicep is a language for declaratively deploying Azure resources. You can use Bicep instead of JSON for developing your Azure Resource Manager templates (ARM templates). Bicep simplifies the authoring experience by:\nproviding concise syntax, better support for code reuse, and improved type safety. Bicep is a domain-specific language (DSL), which means it\u0026rsquo;s designed for a particular scenario or domain. It isn\u0026rsquo;t intended as a general programming language for writing applications.\nThe JSON syntax for creating templates can be verbose and requires complicated expressions. Bicep improves that experience without losing any of the capabilities of a JSON template. It\u0026rsquo;s a transparent abstraction over the JSON for ARM templates.
Each Bicep file compiles to a standard ARM template. Resource types, API versions, and properties that are valid in an ARM template are valid in a Bicep file. There are a few known limitations in the current release.\nTo start with Bicep, install the required tools.\nAfter installing the tools, try the Bicep tutorial. The tutorial series walks you through the structure and capabilities of Bicep. You deploy Bicep files, and convert an ARM template into the equivalent Bicep file.\nTo view equivalent JSON and Bicep files side by side, see the Bicep Playground.\nIf you have an existing ARM template that you would like to convert to Bicep, you can also do that, using this approach.\nBicep offers an easier and more concise syntax when compared to the equivalent JSON. You don\u0026rsquo;t use [\u0026hellip;] expressions. Instead, you directly call functions, and get values from parameters and variables. You give each deployed resource a symbolic name, which makes it easy to reference that resource in your template.\nFor example, the following JSON returns an output value from a resource property:\n\u0026#34;outputs\u0026#34;: { \u0026#34;hostname\u0026#34;: { \u0026#34;type\u0026#34;: \u0026#34;string\u0026#34;, \u0026#34;value\u0026#34;: \u0026#34;[reference(resourceId(\u0026#39;Microsoft.Network/publicIPAddresses\u0026#39;, variables(\u0026#39;publicIPAddressName\u0026#39;))).dnsSettings.fqdn]\u0026#34; }, } The equivalent output expression in Bicep is easier to write. The following example returns the same property by using the symbolic name publicIP for a resource that is defined within the template:\noutput hostname string = publicIP.properties.dnsSettings.fqdn For a full comparison of the syntax, see Comparing JSON and Bicep for templates.\nBicep automatically manages dependencies between resources.
You can avoid setting dependsOn when the symbolic name of a resource is used in another resource declaration.\nWith Bicep, you can break your project into multiple modules.\nThe structure of the Bicep file is more flexible than the JSON template. You can declare parameters, variables, and outputs anywhere in the file. In JSON, you have to declare all parameters, variables, and outputs within the corresponding sections of the template.\nThe VS Code extension for Bicep offers rich validation and intellisense. For example, you can use the extension\u0026rsquo;s intellisense for getting properties of a resource.\nSummary Infrastructure as Code helps cloud engineers in optimizing the deployment and management of (cloud) infrastructure. Azure provides ARM Templates by nature as the go-to scenario. Other vendors/tools that are popular in the Azure world are Terraform, Pulumi and recently developed Microsoft Bicep.\nI hope you got some more insights on Infrastructure as Code and how template-based deployments can dramatically improve your Azure deployment, or if you want\u0026hellip; Spring Clean up tasks.\nHave a great day, and enjoy the rest of Azure Spring Clean 2021\n/Peter\n","date":"2021-03-24T00:00:00Z","permalink":"/post/the-labyrinth-of-azure-iac/","title":"The labyrinth of Azure Infrastructure as Code Tools - Azure Spring Clean"},{"content":"Hi all,\nI\u0026rsquo;ve deployed me an AKS - Azure Kubernetes Service environment that I use in my Azure training class deliveries almost every week (yes, every AZ-course touches on AKS and Containers\u0026hellip;)\nThe Problem My AKS environment was running fine all this time (a bit over a year), allowing me to rely on existing deployed Kubernetes services, as well as building new services as a live demo. Until this morning, where all of a sudden, my own services didn\u0026rsquo;t start at all, but the kube-system services did. 
The error message I noticed for this service was ImagePullBackOff and ErrImagePull.\nIf you know a bit about Kubernetes and custom services (= the PODs that are running your containerized workloads), you know they are pulled from a Container Registry, in my case ACR - Azure Container Registry. Which means that in this scenario, there was probably something wrong with the communication between AKS and ACR. And more specifically, the AKS resource (or the Service Principal representing my AKS cluster) not having (or no longer having\u0026hellip;) the correct permissions to reach ACR. Interestingly, the Kubernetes system containers were still running fine.\n![System_containers_up](../images/screenshot-2021-03-23-2744036e.png) The fix The fix consisted of a few different steps, but all in all, the steps made sense.\nCheck if the current AKS Service Principal was still valid\nAfter facing the problem, it struck me\u0026hellip; an AKS Service Principal is valid for 1 year. Yes, my AKS cluster had been deployed for a bit more than a year (405 days). So yes, my SP had expired. Although there is a way to renew the lifetime of a Service Principal, I couldn\u0026rsquo;t rely on that mechanism, as it only works for a not-yet-expired SP. Sounds normal to me. (In real life scenarios, you could automate this renewal from Azure Functions or Azure Automation)\nThis left me with the next option, creating a new Service Principal and linking it to the existing AKS Cluster Resource.
Let\u0026rsquo;s go for that approach.\nGet the Service Principal ID for the existing AKS cluster As we need to link a new Service Principal to the existing AKS Cluster, let\u0026rsquo;s first retrieve the current Service Principal ID and check its credential expiry date by running the following:\nSP_ID=$(az aks show --resource-group aksrg --name pdtaks\\ --query servicePrincipalProfile.clientId -o tsv) az ad sp credential list --id $SP_ID --query \u0026#34;[].endDate\u0026#34; -o tsv Copy the output aside as we need it again later on.\nCreate a new Service Principal To manually create a service principal with the Azure CLI, use the az ad sp create-for-rbac command. By default, a Service Principal gets assigned to your subscription with Contributor rights, but this will change anytime soon. To avoid any misusage, add the --skip-assignment parameter to make sure the SP resource doesn\u0026rsquo;t get any assignments yet:\naz ad sp create-for-rbac --skip-assignment --name pdtakssp Copy the output aside as we need it again later on.\nUpdate the AKS Cluster with the new Service Principal Take the appId from the Service Principal output and assign it to the variable \u0026ldquo;SP_ID\u0026rdquo;:\nSP_ID=f0ef702d-7108-476c-8129-XXXXXXXX Do the same for the Service Principal password, assigning it to the variable \u0026ldquo;SP_SECRET\u0026rdquo;:\nSP_SECRET=gbKclLRCLy1R4B6SzJ~lNVF5eb5ATvP.9l Followed by the following command, which runs the actual update:\naz aks update-credentials \\ --resource-group aksrg\\ --name pdtaks\\ --reset-service-principal \\ --service-principal $SP_ID \\ --client-secret $SP_SECRET After a few minutes, this process should be completed successfully.\nDefine AcrPush permissions (RBAC) for this new Service Principal The AKS Cluster got updated with the new Service Principal, but this resource cannot connect to the Azure Container Registry yet, as it is lacking the permissions to do so.
But this can be fixed as follows (using the Portal approach, although CLI or PS could also do the trick):\nFrom Azure Portal, browse to the Azure Container Registry you want to use Select Access Control (IAM) Select Add Role Assignment Role = AcrPush (AcrPull would only allow pulling; AcrPush allows both Pull and Push operations) Assign Access To = User, Group, Principal Select = search for the name of your Service Principal (pdtakssp in my example) Save the changes Validate if the problem got fixed AKS is pretty smart in retrying failed operations (it\u0026rsquo;s an Orchestrator after all ;). So let\u0026rsquo;s check if we fixed the problem.\nBrowse to your AKS Cluster resource Select Services and Ingress All services, system and custom workloads, should be up and running again Awesome, AKS did it! (With a little help from Azure Active Directory)\nLesson Learned When deploying AKS Clusters in Azure, remember they get linked to a Service Principal (or Managed Identity alternatively), which is valid for 1 year, but allows for renewal (extend).
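By the way, the AcrPush role assignment from the Portal walkthrough above can also be scripted with the Azure CLI. Here is a dry-run sketch of mine that only composes and prints the command; the SP_ID comes from my example earlier, and the registry resource ID is a placeholder you would normally fetch with `az acr show --name <yourRegistry> --query id -o tsv`:

```shell
#!/usr/bin/env bash
# Dry-run sketch: compose the AcrPush role assignment for the new Service Principal.
# Placeholder values - fetch the real registry ID with:
#   ACR_ID=$(az acr show --name <yourRegistry> --query id -o tsv)
SP_ID="f0ef702d-7108-476c-8129-XXXXXXXX"
ACR_ID="/subscriptions/<subscription-id>/resourceGroups/aksrg/providers/Microsoft.ContainerRegistry/registries/<yourRegistry>"

# AcrPush allows both Pull and Push; AcrPull would allow pulling only.
CMD="az role assignment create --assignee $SP_ID --role AcrPush --scope $ACR_ID"
echo "$CMD"
```

Running the echoed command against your real IDs scopes the permission to the registry only, which is tighter than a subscription-wide assignment.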
If your Service Principal got expired, the fix is to create a new Service Principal, link it to the AKS Cluster and specify AcrPush RBAC permissions for the Container Registry you want to use.\nNow I\u0026rsquo;m going to check on that automatic renewal, or at least update my calendar to renew my Service Principal in time next year.\nTake care for now, feel free to reach out on Twitter or peter @ pdtit dot be for questions.\nthanks, Peter\n","date":"2021-03-23T00:00:00Z","permalink":"/post/renewing-expired-aks-service-principal/","title":"AKS ErrImagePull and ImagePullBackOff on AKS after a year"},{"content":"Hi all,\nI hope you all have great holidays this time around, giving you the opportunity to spend time with your family as well as having the opportunity to learn some new skills, which in my case means learning Blazor, a Framework within the DotNet family, allowing for \u0026ldquo;any-client\u0026rdquo; applications (browser, mobile device).\nMy learning journey involves building a front-end Web App, connecting to a SQL (Azure) database back-end. To make this work, I want to use the SQL Server Entity Framework.\nWhat happened? Besides installing the necessary Nuget Packages within my application, I also need to install the dotnet-ef Entity Framework Tools, initiating the following command:\ndotnet tool install --global dotnet-ef which was throwing an error\nWhat to check? Based on the error message and description, there were a few things to validate:\nUsing Preview release features; not valid in my case since I\u0026rsquo;m not using a preview release. Unauthorized access to the Nuget Feed; not valid in my case, since I am not using any Package Feed integration; all Nuget packages can be downloaded directly from Nuget.org Mistyped the name of the tool; well, no, it was correct I know I was using DotNet 5.0.1 for my Blazor project, and I know I have the correct SDK and Framework installed on my machine.
Let\u0026rsquo;s validate by running\ndotnet --version I also installed the different EntityFramework Packages I need (FrameworkCore, Design, SQLServer,\u0026hellip;), and those are also version 5.0.1\nHow to fix this error? This explicit versioning led me to the solution; what if I specify that version for the tool, as somehow recommended as a first thing to check (although that was referring to preview, but hey, let\u0026rsquo;s give it a try\u0026hellip;)\ndotnet tool install --global dotnet-ef --version 5.0.1 followed by running\ndotnet ef which loaded fine this time! Problem solved!\nI guess the root cause of the issue is related to my \u0026ldquo;mixed\u0026rdquo; setup, where I still have dotnetcore 3.1 on my machine as well, probably confusing the dotnet environment. By explicitly referring to the version you want to use, you can avoid seeing weird error messages.\nthanks, Peter\n","date":"2020-12-27T00:00:00Z","permalink":"/post/dotnet-tool-install-dotnet-ef-failing-with-unauthorized/","title":"Dotnet tool install dotnet-ef failing with unauthorized"},{"content":"Hi all,\nThis is one of the nail-biting challenges of our industry: a straightforward task that suddenly fails\u0026hellip;\nWhat happened? I was creating a pipeline in Azure DevOps to deploy an ARM template for VM setups with VM Extensions. To prep this deployment, the artifacts (DSC scripts) should be copied to Azure Blob Storage, in order for the Azure DevOps build agent to \u0026ldquo;find\u0026rdquo; it. Easy-peasy, there is a predefined task for that, called Azure Blob File Copy\nI defined source (my Azure Repos folder) and target (Blob Container in Azure Storage Account), but saw the copy failing with an interesting error message:\nINFO: Authentication failed, it is either not correct, or expired, or does not have the correct permission RESPONSE Status: 403 This request is not authorized to perform this operation using this permission.
Knowing I\u0026rsquo;m running this deployment with a valid Service Principal linked to my Azure DevOps Service Connection which deployed Azure Resources successfully before (including a new Azure Storage Account from within the same Job earlier in the process), I had no immediate idea what was wrong. Also, I have used AzCopy for a while already, without issues.\nWhat to check? Next challenge is, where do I start troubleshooting?\nMy Service Principal only got created last week, so definitely not expired (and working for other Azure deployments);\nMy Service Principal got created with all default settings and scoped to my subscription as Contributor ([az ad sp create](https://docs.microsoft.com/en-us/cli/azure/create-an-azure-service-principal-azure-cli)); While you should definitely limit the scope in a production environment, that\u0026rsquo;s not that important for my demos, and thus also not the blocking factor;\nCompare a working AzCopy pipeline with the non-working version to identify any differences;\nBINGO! Not yet, but at least I found the clue\u0026hellip; in the AzCopy documentation which specifies how to enable Azure Blob storage access using Azure Active Directory instead of the traditional SAS token (A Service Principal is an Azure Active Directory object, therefore I was not looking into SAS tokens anymore\u0026hellip;). Following the link under Option 1 (Azure AD) pointed me to the following updates in AzCopy:\n- The level of authorization that you need is based on whether you plan to upload files or just download them. - If you just want to download files, then verify that the Storage Blob Data Reader role has been assigned to your user identity, managed identity, or service principal.
- If you want to upload files, then verify that one of these roles has been assigned to your security principal: - Storage Blob Data Contributor - Storage Blob Data Owner Let\u0026rsquo;s give this a try:\nFrom your Azure DevOps Project, select Project Settings / Service Connections\nSelect the Service Connection you use for the given Pipeline deployment and choose Manage Service Principal Roles\nThis opens Azure Active Directory - Access Control from where you can add a role assignment by clicking Add Role Assignment\nFrom the list of roles, select Storage Blob Data Contributor, and search for your Service Principal name in the Select field (if you don\u0026rsquo;t know the exact name of your Service Principal anymore, from Azure DevOps / Service Connections, select \u0026ldquo;Manage Service Principal\u0026rdquo;, which will open your Service Principal blade in Azure Active Directory for this specific object - the name will be visible from there)\nSave your changes. (Note - I\u0026rsquo;m allowing this permission for this Service Principal across the full subscription; in a real-life scenario, it would be enough to allocate this Azure Role scoped to the specific storage account)\nThe result should look similar to the screenshot below:\n![Add Role Assignment](../images/screenshot-2020-12-14-efdc1bd2.jpg) Run the Pipeline again from Azure DevOps, and behold\u0026hellip; a successful run this time :)!\nI hope this helps anyone bumping into the same issue as I did.
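For completeness, the Azure Blob File Copy step I described can be expressed in pipeline YAML roughly like this. This is a sketch of mine, not my actual pipeline; the service connection, storage account, and container names are placeholders:

```yaml
# Sketch of the Azure Blob File Copy task (placeholder names).
# The service connection 'my-service-connection' is backed by the Service Principal
# that needs the Storage Blob Data Contributor role described above.
steps:
  - task: AzureFileCopy@4
    inputs:
      SourcePath: '$(Build.SourcesDirectory)/dsc-scripts'
      azureSubscription: 'my-service-connection'
      Destination: 'AzureBlob'
      storage: 'mystorageaccount'
      ContainerName: 'artifacts'
```

Note that version 4 of the task uses AzCopy v10 under the hood, which is exactly why the Azure AD data-plane roles matter here.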
For me, the lesson learned is to read the Azure Docs a bit more every now and then, especially when something isn\u0026rsquo;t working right away\u0026hellip;\nthanks, Peter\n","date":"2020-12-14T00:00:00Z","permalink":"/post/azcopy-failing-in-azure-devops/","title":"AzCopy failing in Azure Devops with error ServiceCode=AuthorizationPermissionMismatch"},{"content":"Hey there,\nI\u0026rsquo;ve been doing quite a lot with Docker and the different Azure Container Services offerings like Azure Container Instance and Azure Kubernetes Services.\nAs you probably know, the starting point of a containerized application is the Dockerfile. Look at this like an instruction script, which tells Docker what needs to happen, in order to grab the application source code, compile it and produce the container image.\nBesides the complexity of running containers itself, I personally think writing a Dockerfile is equally difficult and complex. So I am more than happy to know that Visual Studio (2017 and 2019) comes with some interesting Container Tools. Aside from helping in debugging containerized workloads, providing some interaction with the Docker engine, it also helps in generating a Dockerfile for you.\nOr at least \u0026ldquo;it pretends\u0026rdquo;\u0026hellip; Read on to find out about my journey, messing around for 2,5 days before I actually got my Docker container working\u0026hellip;\nAdd Docker Support When you run Docker Desktop on the same machine as your Visual Studio development environment, you don\u0026rsquo;t need to do anything.
The integration is just there (I honestly never looked into the details of how this works, but hey, it is there\u0026hellip;)\nFrom VS2019 Solution Explorer, right click on the Project you want to containerize Select Add\u0026hellip; Notice Docker Support From here, it prompts for the Operating System for the container workload, being Linux or Windows; I guess this depends on the app language you are using though; since, in my case, my sample app is using dotnetcore3.1, which runs on both, I could choose.\nFrom here, it produces the necessary Dockerfile, looking like this:\nAt first glance, all looks good, right? This is what the Dockerfile is doing:\nGrab the ASP.NET 3.1 base container image and specify the work directory as /app, and expose the application on ports 80 and 443; this makes total sense, as we are working with a web application here Next, grab the DotnetCore 3.1 SDK base container image and specify the work directory as /src Followed by copying my application source code into it From here, it runs the usual dotnet restore, build, publish Producing a final Docker image, which executes \u0026ldquo;dotnet\u0026rdquo; to start my SimplCommerce.Webhost web application within the Docker Container. At second glance, all still looked good when I was starting my container (F5). The \u0026ldquo;Build\u0026rdquo; process kicks off, similar to running this for a traditional code-based application, and going through the Docker compile process as expected:\nOnce the build process is done, it switches to the Container Tools view, exposing details about the actual running container (ports, logs,\u0026hellip;) As well as showing a running application in the browser Where it goes wrong I was super excited at this point; I got my web application running, moved it into a working Docker Container, just by going through 3 clicks.
Awesome!!\nSince the intention is to run my application outside of Visual Studio debug mode, I was obviously running a second test, by manually starting my container using the Docker command line.\ndocker run -p 2500:80 fastcarcase:dev is the command to kick off my container instance, and all seemed fine when checking the running container state:\nHowever, when browsing to http://localhost:2500, nothing is showing up; also, the container is not providing any logs. Not even when running in interactive mode.\nFrom here, I started redoing a lot of steps, going through about all the troubleshooting steps I could find online, starting all over from the application source code, going through the Add\u0026hellip; Docker Support steps once more,\u0026hellip; always resulting in the same. A workable containerized application in Visual Studio debug mode, but not when starting the exact same container manually. Frustrating :)\nThe life saver After trying, trying again, trying once more,\u0026hellip; I thought, why not create a Dockerfile manually and test from there. GOOD THOUGHT, apparently.\nHere is the Dockerfile I came up with, after reading several Docker articles, blog posts and validating several of my other sample applications from earlier demo scenarios I used:\nFROM mcr.microsoft.com/dotnet/sdk:3.1 AS build WORKDIR /app COPY *.sln . COPY . .
WORKDIR /app/src/SimplCommerce.WebHost RUN dotnet restore RUN dotnet publish -c Release -o out FROM mcr.microsoft.com/dotnet/aspnet:3.1 AS runtime WORKDIR /app COPY --from=build /app/src/SimplCommerce.WebHost/out ./ ENTRYPOINT [\u0026#34;dotnet\u0026#34;, \u0026#34;SimplCommerce.WebHost.dll\u0026#34;] Technically, this Dockerfile is doing almost the exact same as what the Visual Studio generated one did, using the exact same base images (dotnet sdk and dotnet asp.net), as well as copying all the files, and using \u0026ldquo;dotnet SimplCommerce.Webhost.dll\u0026rdquo; as the starting command when the container starts up.\nSurprisingly, this seemed to work fine! I could start my container on my local machine, but also push it into Azure Container Instance and run it fine, and even tried using it in my Azure Kubernetes cluster. And all was working fine.\nClosing I guess I need to go through the concepts and details of a Dockerfile much more in detail to figure out where the differences were, but at least for now, I can move on with building my next demo scenario. Automating all this using Azure DevOps Pipelines. Which will be for a future blog post I promise\u0026hellip;\nSee you all soon, reach out when you have any questions or comments on this post or on Azure in general,\nCheers, Peter\n","date":"2020-11-22T00:00:00Z","permalink":"/post/the-weird-case-of-visualstudio2019-dockerfile/","title":"The weird case of the Visual Studio 2019 Dockerfile"},{"content":"Today, Nov 10th, was the official date of the long-announced \u0026ldquo;dotnet5 Framework\u0026rdquo;, and it is described as a major release. 
Still being new in the developer world myself, I know the basics of ASP.NET 3.7 and 4.5, so I can imagine jumping to a 5.0 release is indeed a big thing.\n.NET 5.0 improvements The biggest improvements announced by the Product Team are:\nMigration-friendly for older .NET versions Production-ready from day 1 of release (thoroughly tested against the http://www.dot.net and http://www.bing.com websites) Enhanced performance ClickOnce client app publishing Smaller container image footprint Support for Windows Arm64 and WebAssembly (Blazor) Support will run until Feb 2022, which seems to be the release date for DotNet 6.0 LTS\nAnother big deal is the Unification of the DotNet platform; what this means is that the .NET standard and characteristics will be available across different scenarios (mobile apps, web apps, webassembly, desktop apps, IOT,\u0026hellip;) relying on the same set of APIs, tools and languages. While not all has been integrated and unified yet, it\u0026rsquo;s still on the roadmap to become fully unified by version 6.0 in about 18 months from now.\nMore details about the dotnet 5.0 release can be read in the \u0026ldquo;announcement blog\u0026rdquo;.\nDeveloper Environment dependencies Visual Studio 2019 In order to use the .NET 5.0 Framework, an update of Visual Studio 2019 is required.
More specifically, it needs to be version 16.8.0; if all is set as default in your IDE, you should get this prompt to upgrade automatically; if this has been disabled, you could launch the upgrade yourself by starting the Visual Studio Installer from within the Visual Studio menu option Tools / Get Tools and Features\u0026hellip;\nVisual Studio for Mac Updating to the \u0026ldquo;latest version\u0026rdquo; of Visual Studio for Mac should bring in support for .NET 5.0;\nVisual Studio Code Integration of .NET 5.0 into Visual Studio Code is handled by the \u0026ldquo;C# Extension\u0026rdquo;, so if you update this one to the latest version, you are good to go too.\nCreating your first .NET 5.0 Project in Visual Studio Now that the prerequisites have been covered, let\u0026rsquo;s give it a try and build a new ASP.NET Web Application:\nFrom the Visual Studio 2019 menu, select File / New / Project\u0026hellip;\nFrom the list of templates, select \u0026ldquo;ASP.NET Core Web Application\u0026rdquo;\nPress Create; in the next step, from the top, select .NET Core and ASP.NET Core 5.0 Choose ASP.NET Core Web App as template + confirm by pressing the Create button. Wait for the project to load.\nFrom Solution Explorer, select the Project you just created (the bold title), and open its Properties; this will also confirm the .NET 5.0 Framework\nPublishing your Web App to Azure App Services Developing an app is one thing, but what gives more joy than seeing it running in Azure?
Here we go:\n(Assumptions: you have an active Azure subscription, and the necessary RBAC permissions to create and deploy App Services\u0026hellip;)\nFrom Solution Explorer / select your Project (the bold title), right click to open the context menu, and select Publish From the Publish wizard Target step, select Azure; click Next From the wizard\u0026rsquo;s Specific Target step, select Azure App Service (Linux); click Next From the wizard\u0026rsquo;s App Service step, Click the + sign to create a new **Azure App Service** - provide a **unique** name for the webapp, using lowercase characters - specify a name for a **new Resource Group** - specify a new **App Service Plan**, for example S1 - 1.75 GB Memory Validate all the settings, and confirm by pressing Finish From the summary page, press Publish; This starts the publishing process. Wait for it to complete successfully. The process can be viewed from the Output window After waiting another few seconds, your default browser opens the Web App URL, and shows the web app running\nLet\u0026rsquo;s validate the App Service Configuration settings from within the Azure Portal:\nLog on to https://portal.azure.com using your Azure Admin Credentials Browse to App Services Notice the App Service you just created Browse to this App Service\u0026rsquo;s Configuration (under Settings) Notice the correct .NET 5.0 version Summary In this article, you got introduced to the new .NET 5.0 Framework. I walked you through the Project setup in Visual Studio 2019 for an ASP.NET Core 5.0 based web application, followed by publishing this to a new Azure App Service resource.\nAs always, I hope you learned from this article; ping me whenever you have any (Azure) questions.\nTake care, Peter\n","date":"2020-11-10T00:00:00Z","permalink":"/post/publish-your-first-dotnet5-app-to-azure-app-services/","title":"publish your first dotnet5 app to Azure App Services"},{"content":"Hey everyone,\nWow, it\u0026rsquo;s September already!
Where has summer been? And honestly, where has the (nice) year been? Most of you, just like myself, are probably still working from home, and I guess so are your kids.\nSeptember used to feel like a fresh start (remember that smell of your new school outfit, the new school equipment, fresh-sharpened pencils, the excitement of learning new things, meeting new classmates,\u0026hellip;), it almost seems like I want to go back to school myself.\nTrust me, I live this moment almost every single week as an Azure Technical Trainer, feeling the excitement from the attendees to learn new things about Azure, getting their solutions validated or hearing how their problems can be solved by using Azure cloud services. I even feel how nervous they are about taking a Microsoft Certification and becoming Azure certified.\nAzure Back To School So to stay within the \u0026ldquo;spirit of continuous learning\u0026rdquo;, I was more than happy to contribute to Dwayne Natwick\u0026rsquo;s event Azure Back to School,\nand present a session on \u0026ldquo;Azure DevOps is not for IT Pro (says no one ever again)\u0026rdquo;. Why was I stupid enough to try and cram this amazing tool into a 30-minute session? I still don\u0026rsquo;t know why :)\nAnyway, during this session, I\u0026rsquo;m scratching the surface of Azure DevOps capabilities:\nAzure DevOps Repos - Git-compatible source/version control Azure DevOps Boards - Following Scrum, Agile and similar development project management approaches, this component provides work items, bug tracking,\u0026hellip; including graphical boards Azure Pipelines - Probably the core of the solution, allowing you to enable a complete CI/CD (continuous integration/continuous deployment) scenario using YAML or the Classic Editor, and deploy your templates/workloads to Azure (or other platforms) The goal in my 30 min session was to just show you the core capabilities, to at least get you looking into it.
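To give that Pipelines component a face, a minimal azure-pipelines.yml could look roughly like this. A sketch with assumed values (this is not from my session demo; the service connection name and paths are placeholders):

```yaml
# Minimal CI/CD sketch: build a .NET project, then deploy an ARM template.
# 'my-service-connection', the resource group, and file paths are placeholders.
trigger:
  - main

pool:
  vmImage: 'ubuntu-latest'

steps:
  - script: dotnet build --configuration Release
    displayName: 'Build the application'

  - task: AzureResourceManagerTemplateDeployment@3
    inputs:
      azureResourceManagerConnection: 'my-service-connection'
      subscriptionId: '$(subscriptionId)'
      resourceGroupName: 'demo-rg'
      location: 'West Europe'
      csmFile: 'templates/azuredeploy.json'
```

Commit that file to the root of your repo and Azure Pipelines picks it up on the next push to main.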
If I can get you creating your Azure DevOps Organization and creating your first Azure DevOps project\u0026hellip; I have reached my goal :).\nEverything else will follow in future blog posts here at 007FFFLearning.com, no worries.\nThat said, if you made it all the way through this post, and you cannot wait to get started with Azure DevOps, send me a DM on Twitter or leave me an email with the exact subject \u0026ldquo;@zure DevOps - Back To Sch00l\u0026rdquo;, and I\u0026rsquo;ll return you a little gift, allowing you to jump into Azure DevOps right away\u0026hellip;!\nOne last thing, there is a lot more cool and interesting Azure stuff to be found on Azure Back to School for the rest of the month, so grab that opportunity to keep your internal learning daemon happy ;).\nSee you around! Take care,\nthanks, Peter\n","date":"2020-09-13T00:00:00Z","permalink":"/post/azure-back-to-school/","title":"Azure Back To School (with Azure DevOps)"},{"content":"Hi all,\nEarlier this week, I got blown away by an interesting Terraform issue.\nThe Problem I was running a deployment that had worked fine for months. I initiated a new deployment using the usual \u0026ldquo;terraform init\u0026rdquo; step, which ran fine. Followed by the usual \u0026ldquo;terraform plan\u0026rdquo;, and BOOM, the following message appeared:\nPanic: not a collection type\nSince this was a new template I created, I assumed an issue with the syntax or anything similar. As I couldn\u0026rsquo;t find anything, I tried running the same steps with a \u0026ldquo;valid\u0026rdquo; template. Only to find out it produced the same error message.\nSince I was running this in Azure Cloud Shell, I thought next this could be related to the Azure Cloud Shell Azure CLI version and/or the Terraform version within.\nTo get the version of Terraform, run the following:\nterraform version OK, cool, I was running version 0.13.1, which, based on what I know, was only published recently.
Interesting though, there was also a note about my version being outdated, and I needed to upgrade to 0.13.2.\nAFAIK, I\u0026rsquo;ve been running my deployments fine over the last few weeks, even this Wednesday during an Azure training demo. So something else must be going on. Digging in some more, I found this issue in the Hashicorp GitHub repository, having a reference to the createEmtpyBlocks, last updated only 3 days ago\u0026hellip; hmmmm\u0026hellip; let\u0026rsquo;s have a look\u0026hellip;\nhttps://github.com/hashicorp/terraform/pull/26028\nThe Solution So apparently Terraform 0.13.1 got published recently, causing some issues, and is now being replaced with 0.13.2 as a fix (amongst other issues, based on a broader Google search)\nLet\u0026rsquo;s give that a try; but wait, I\u0026rsquo;m running Terraform as part of Azure Cloud Shell, so I probably have to wait for an update getting integrated into the Shell, if that is even happening automatically (something I\u0026rsquo;ll search for later).\nSince Cloud Shell is a stripped-down Linux, I could only try and treat it like a Linux VM, running the following:\ncurl -O https://releases.hashicorp.com/terraform/0.13.2/terraform_0.13.2_linux_386.zip \\ \u0026amp;\u0026amp; unzip terraform_0.13.2_linux_386.zip \\ \u0026amp;\u0026amp; mkdir TF0132\\ \u0026amp;\u0026amp; mv terraform TF0132/ cd /TF0132 Since the current Terraform 0.13.1 is part of the default Cloud Shell PATH, it will run from any location you are in; to \u0026ldquo;force\u0026rdquo; Cloud Shell to use the newer Terraform 0.13.2, one could launch it directly from \u0026quot;/TF0132\u0026quot;, for example:\n./terraform init ./terraform plan Resulting in a successful Terraform deployment again :)\nHopefully this gets picked up by Azure Cloud Shell soon enough, so I can get rid of this \u0026ldquo;temporary\u0026rdquo; workaround.
On the other side, it is actually quite cool to run these versions side by side. Who knows what other bugs I detect while running 0.13.1, although I\u0026rsquo;m rather sure I will default to the 0.13.2 from now on.\nI hope this helps anyone having the same issue as I did,\nthanks, Peter\n","date":"2020-09-06T00:00:00Z","permalink":"/post/terraform-panic-error/","title":"Terraform showing an error Panic not a collection type"},{"content":"Hey,\nAs some of you have known me for years already, you also know I\u0026rsquo;m rather bad at coding (although I\u0026rsquo;ve been learning Blazor for a few weeks, but more on that journey in future posts\u0026hellip;), and always \u0026ldquo;got scared\u0026rdquo; about DevOps. Especially the Dev part in the word 😊. I still remember being in Bangalore, India early 2016 to deliver an Azure workshop to Microsoft GSI Partners (Wipro, Tech Mahindra, Accenture,\u0026hellip;) as a freelance trainer together with Microsoft AzureCAT engineers. While my sessions were totally in my Azure comfort zone (Networking, Storage, Azure Active Directory), I also learned a lot from the Azure App sessions my colleagues delivered. But then all of a sudden, I had to jump in on Wednesday afternoon, taking over the \u0026ldquo;DevOps practices\u0026rdquo; session due to some urgent circumstances. Oh man, down went my comfort level. Me, the guy they saw earlier in the week, delivering Azure Infra sessions with ease, now needing to talk about DevOps, something I didn\u0026rsquo;t know at all? I hardly used Visual Studio at that time, let alone could I talk about this process.\nSo instead of delivering the foreseen session (DevOps processes, Visual Studio integration with source control, CI/CD Pipelines,\u0026hellip;), I just talked about my personal perspective on DevOps, how ARM templates and Azure Automation were \u0026ldquo;the DEV part\u0026rdquo; in my world, helping me, \u0026ldquo;the OPS guy\u0026rdquo;, combine my +15 years background in building Microsoft datacenters at customers, and do amazing things with it.
All in all, the talk was very much appreciated, as it was personal.\nI already know Azure DevOps, so why bother? Jumping 5 years further in time, I have been using Azure DevOps as a tool myself for about 2 years now, and have also delivered several successful Azure AZ-400 (https://docs.microsoft.com/en-us/learn/certifications/exams/az-400) workshops in my Microsoft role as Azure Technical Trainer. And then I stumbled into Emily Freeman\u0026rsquo;s (https://twitter.com/editingemily) DevOps for Dummies book (https://www.amazon.com/DevOps-Dummies-Computer-Tech/dp/1119552222).\nAt first, I wasn\u0026rsquo;t really interested in getting myself a copy, given it was \u0026ldquo;for Dummies\u0026rdquo;. But remembering the other \u0026ldquo;for Dummies\u0026rdquo; books I read in the past, they were always about something I didn\u0026rsquo;t know at all, yet I learned a lot from them. So I gave it a chance \u0026ndash; also because the Kindle edition is only $15,99 USD \u0026ndash; that\u0026rsquo;s like 3 Starbucks coffees 😊. And wow, I was hooked on it from the first chapter.\nWhat got me hooked? First of all, Emily is a subject matter expert, period. I\u0026rsquo;ve seen her presenting both in-person and at virtual conferences, and I enjoy learning from her stories. She manages to combine technical expertise with understandable explanations, a calm presentation style, funny twists,\u0026hellip; everything you look for in a presenter. And next to that, passion for the topic. It is clear Emily can look back on a huge amount of in-the-field experience helping customers implement (or transform, for that matter) DevOps best practices into their IT lifecycle management.\nIt was this natural flow of words, the logic of the chapters, how the whole book is built up from basics to more complex scenarios and everything in between, that makes it worthwhile for anyone in IT to read. This book is not about Azure DevOps, just to be clear.
It focuses on the processes, the challenges, how to get from where you are today to where you can be in the (near) future as an organization, and also nicely describes the real-life challenges that come with it.\nNext, every couple of pages, she makes a nice annotation with an interesting quote, additional side-reference material, a funny story around DevOps,\u0026hellip; that keeps it \u0026ldquo;light to digest\u0026rdquo;, yet still hugely informational.\nAbout the book structure The book is about 300 pages long, split up into 6 chapters:\nDemystifying DevOps \u0026ndash; this is one of the best 50 pages I know describing what DevOps REALLY is. What\u0026rsquo;s good about it, where the challenges are, why/how colleagues may not like it and how to convince them,\u0026hellip; Establishing a Pipeline \u0026ndash; From planning to software lifecycle management, and how to get there, also explaining how your development code itself should take DevOps into account Connecting the circuit \u0026ndash; this section covers how to handle and get feedback, and manage iterations, success and failure of the process Kaizen \u0026ndash; Not a Japanese hero, but how to continuously improve your processes, learn from mistakes and optimize the business overall DevOps Tools \u0026ndash; A nice overview of different DevOps tools, useful for private and public cloud platforms Top 20 \u0026ndash; 10 reasons where DevOps is crucial, and another 10 reasons where it can fail What I remember from it \u0026ldquo;DevOps is approachable and real\u0026rdquo; is one of the best quotes. Aside from that, I took a lot of notes around processes, how to inspire people to start embracing DevOps practices, the challenges that come with it, but mainly how it can help any organization transform their current mode of operations and software lifecycle into an optimized, well-oiled flow, where IT becomes a business driver, no longer a cost center or cost generator.\nI honestly wish I had this book 5 years ago, showing me there is nothing scary or anything to be afraid of when
considering DevOps. It\u0026rsquo;s about people, processes and products. It\u0026rsquo;s not about being a full-stack developer or being the expert datacenter specialist. It\u0026rsquo;s about how both worlds can nicely work together. (Which in these difficult times of COVID-19 and racial disturbance could actually be baby-steps in the positive direction)\nPing me if you have any questions on the book, DevOps or Azure DevOps in general.\nCheers, Peter\n","date":"2020-08-22T00:00:00Z","permalink":"/post/devops-for-dummies-review/","title":"DevOps for Dummies - Emily Freeman - Review"},{"content":"Hi there,\nIf you wonder why it\u0026rsquo;s been a bit quiet on the blog front, it\u0026rsquo;s because I enjoyed 2 weeks off from work, getting away from Azure and technology for a bit and spending some needed family time. Bonus was that these 2 weeks probably had the nicest weather of the whole year (although my wife would argue with that, since she is more into snow and cold, not the humid, hot, sweaty weather we had).\nI spent my days reading books (yes, tech related, more on that in future blog posts\u0026hellip;), watching videos, catching up on archived Tweet posts,\u0026hellip; the relaxing life, you could say. We managed to book a last-minute trip to Canary Wharf, London for a couple of days, so that helped in having a good time as well. And that\u0026rsquo;s where I actually found me another challenge, walking\u0026hellip; yes, walking. You might think what\u0026rsquo;s the big deal about that, but knowing I\u0026rsquo;ve been delivering Azure workshops 4 or 5 days a week for the last few years, it means I sit at my desk about 10 hours a day.
Next to that, I used to enjoy walking from my hotel room to the customer\u0026rsquo;s office to deliver training (where possible), but due to COVID19 and the whole work-from-home situation, the only walking I\u0026rsquo;m doing is going up and down the (22) stairs; so you can\u0026rsquo;t really say I\u0026rsquo;ve been exercising a lot.\nThe Conqueror Back to my London walking\u0026hellip; the process actually started earlier, when I discovered \u0026ldquo;The Conqueror\u0026rdquo; through a Facebook advertisement.\nThey offer a virtual journey, stimulating people to walk (or run, or bicycle,\u0026hellip; basically doing a distance activity), almost like hiking or taking a trail passage. Along the trip, you get virtual postcards by email, and when completing the journey, you receive a certificate to print and a metal medal with a cool design.\nMy Virtual Mission So a few weeks ago, I had already tried to push myself into walking a bit more as part of my work-from-home story, but I have to admit, although I activated my first \u0026ldquo;challenge\u0026rdquo;, I didn\u0026rsquo;t do much walking :(.\nHowever, for some reason, and without \u0026ldquo;pushing\u0026rdquo; myself, we did a lot of walking during the London stay; some days it was only 2-3 miles, other days were close to 8-9 miles. And again, this probably isn\u0026rsquo;t that much in reality, but it actually gave me a good feeling to know I (and at the same time my family with me) could do this.
So I started entering my achievements in the challenge tool.\nWith each bit of progress, they provide you with some media gadgets to share on social media, showing how you are doing (I\u0026rsquo;m not that social-media minded to share it though\u0026hellip; or at least not yet, maybe in the future when I keep going\u0026hellip;)\nand eventually one that shows the completion certificate upon finishing\nThat\u0026rsquo;s how I managed to complete 2 challenges, \u0026ldquo;English Channel - 21 miles\u0026rdquo;\nand \u0026ldquo;The Inca Trail - 26,2 miles\u0026rdquo;\nWhat\u0026rsquo;s next Personally, my biggest challenge now lies ahead of me, with the usual work stream starting again tomorrow for the following weeks. I need to push myself and do a \u0026ldquo;daily walk\u0026rdquo;. Reaching my 1-3 miles as a start, maybe moving up to some longer walks before the training day starts, or after the training day is finished. Mixing this with enough family time and other things to look after might be tough in the beginning. Luckily I got my family to support me in this, as they know I want to succeed.\nIf my story inspired you to join a similar challenge, let me know when you have signed up and we can motivate each other as a virtual team. For now, I\u0026rsquo;m going to enjoy my little victory, dreaming of longer walks in the near future.\nCheers, Peter\n","date":"2020-08-16T00:00:00Z","permalink":"/post/i-found-me-another-challenge/","title":"I found me another challenge: The Conqueror"},{"content":"As most of you know, I enjoy writing technical (Azure related) books, but every now and then I am not writing myself, but rather doing technical reviewing.
A few weeks ago, I was approached by Packt, asking me to review their Azure for Architects - third edition\nDon\u0026rsquo;t let the reference to \u0026ldquo;third edition\u0026rdquo; fool you: there has been a massive rewrite of several chapters, with fresh new content and more technical information, and new chapters were added as well.\nAs technical reviewer, I mainly take on the responsibility of making sure the content is technically accurate. This involves not only the textual paragraphs and descriptions, but also any hands-on step-by-step guidance referenced. While this book is targeted at cloud architects, it does not just cover the high-level capabilities of several Azure services, but also takes the reader on a journey through different use cases, how different services relate to each other and more. While not specifically written for it, I can tell you this work is a decent preparation for the Azure Solutions Architect Expert exams\nAbout the book Thanks to its support for high availability, scalability, security, performance, and disaster recovery, Azure has been widely adopted to create and deploy different types of applications with ease. Updated for the latest developments, this third edition of Azure for Architects helps you get to grips with the core concepts of designing serverless architecture, including containers, Kubernetes deployments, and big data solutions.\nYou\u0026rsquo;ll learn how to architect solutions such as serverless functions, you\u0026rsquo;ll discover deployment patterns for containers and Kubernetes, and you\u0026rsquo;ll explore large-scale big data processing using Spark and Databricks. As you advance, you\u0026rsquo;ll implement DevOps using Azure DevOps, work with intelligent solutions using Azure Cognitive Services, and integrate security, high availability, and scalability into each solution.
Finally, you\u0026rsquo;ll delve into Azure security concepts such as OAuth, OpenID Connect, and managed identities.\nBy the end of this book, you\u0026rsquo;ll have gained the confidence to design intelligent Azure solutions based on containers and serverless functions.\nTable of Contents Getting started with Azure Azure solution availability, scalability, and monitoring Design patterns \u0026ndash; Networks, storage, messaging, and events Automating architecture on Azure Designing policies, locks, and tags for Azure deployments Cost Management for Azure solutions Azure OLTP solutions Architecting secure applications on Azure Azure Big Data solutions Serverless in Azure \u0026ndash; Working with Azure Functions Azure solutions using Azure Logic Apps, Event Grid, and Functions Azure Big Data eventing solutions Integrating Azure DevOps Architecting Azure Kubernetes solutions Cross-subscription deployments using ARM templates ARM template modular design and implementation Designing IoT Solutions Azure Synapse Analytics for architects Architecting intelligent solutions Good for almost 700 pages of deep technical content!\nMy feedback I have to be honest, doing the technical reviewing of this book was hard for me. Being an author myself, and mainly on the exact same topics, I had to get over the fact that I was not the one writing the book. While this seems easy, it actually was harder than I initially thought. Each author has a certain writing style, starting from the outline. (In this case, it means I might have ordered the chapters slightly differently.)\nKnowing that each module is stand-alone, you can easily mix and match the order to your needs. Whether you want to learn about a specific topic, grab several chapters to get a clear idea about a broader solution (like containers and Kubernetes), or go through the book page after page from beginning to end, anyone interested in learning about Azure will find what he/she is looking for.
Another thing I noticed, after going through most of the chapters, was the extensive background in data solutions the author(ing team) has - really, those chapters were super detailed and I learned a lot from them myself - especially on the newer data topics around Azure Synapse (Chapter 18). This was quite nice to go through, since it\u0026rsquo;s still rather new.\nIf you want to get a clearer view on serverless, I can definitely recommend chapters 10 and 11, both from a technical perspective as well as the promised architect overview.\nWhile there is nothing wrong with a 700-page book, and again, each chapter is somewhat stand-alone, I sometimes wonder if anyone is actually capable of going through this huge amount of information. I have been \u0026ldquo;living in Azure\u0026rdquo; for almost 7 years full-time now, and at moments, it even felt heavy to me. Let alone if you are less familiar with a lot of the services. But on the other hand, this also means it could become the \u0026ldquo;go-to\u0026rdquo; reference for Azure content. And knowing this is the third edition, I hope the Packt editor team also keeps this in mind, making sure the book gets refreshed and updated frequently, at least once a year (as has happened up till now)\nLast, I also like the fact that a lot of code snippets are publicly available on GitHub, especially useful for finding the PowerShell scripts, Azure Resource Manager templates or Azure CLI used throughout the book. Even if, after some time, the Azure Portal might change, and capabilities and features of the described services might (and guaranteed they will!) change, I hope the authors also keep this repo up-to-date.\nFeel free to reach out if you have any more questions on this book or its content. Unfortunately I don\u0026rsquo;t have access to discount codes or free copies, if that would be your first ask :).
However, knowing this \u0026ldquo;Azure bible\u0026rdquo; is listed at $34,99 (ebook) and $49,99 (printed+ebook), this is really a lot of value for your money if you ask me.\nStay safe and healthy you all!\n/Peter\n","date":"2020-08-02T00:00:00Z","permalink":"/post/azure-for-architects---reviewing-done/","title":"Another Tech Reviewing done: Azure For Architects - 3rd edition"},{"content":"Hey there,\nAt the recent DockerCon (virtual) conference, Docker announced a tightened partnership with Microsoft, boosting the adoption and integration of Docker containers for Windows Server as well as Azure-running workloads. A first announcement involved a cool integration with Azure Container Instance (ACI), a low-level container runtime on Azure, allowing you to run a container without the typical complexity. While ACI has been around for 2 or more years already, it now becomes possible to manage and run your ACI-based containers directly from the Docker command line.\nAnd that\u0026rsquo;s exactly what I will guide you through in this post.\nPrerequisites This capability is in preview for now, and requires Docker Desktop Edge 2.3.2 (I\u0026rsquo;ll show you how to upgrade if you already run Docker) An Azure subscription, allowing you to deploy and run Azure Container Instance A sample Docker container (you can grab my example if you want, or use any other you like) Upgrading to Edge Desktop I was already running Docker Desktop on my Windows 10 machine, using Windows Subsystem for Linux (WSL) integration; I actually wrote another post a few weeks ago on how to get this up and running.\nFrom the Docker icon in the taskbar, select \u0026ldquo;About Docker Desktop\u0026rdquo;; this will show you the current version As you can see, I\u0026rsquo;m using the stable version 2.3.0.3\nSince I keep my demo containers in Docker Hub, I wasn\u0026rsquo;t too worried about losing them.
However, if you want to keep a backup of your current Docker images, know you can store these in a Linux tar file, using docker save -o \u0026lt;nameforbackup.tar\u0026gt; \u0026lt;docker_image_name\u0026gt; Uninstall Docker Desktop, by searching for \u0026ldquo;Docker Desktop\u0026rdquo; in the Start Menu, right-clicking it and selecting \u0026ldquo;Uninstall\u0026rdquo; Follow the instructions to have the software removed from your machine.\nFrom the Docker website, download the Docker Desktop Edge edition Accept the options to create a desktop shortcut and allow the integration with WSL (if that is what you were using before\u0026hellip;)\nWait for the component install to complete\nAfter only a few minutes, Docker Desktop should run fine again. You can validate this from the Docker icon in the taskbar\u0026rsquo;s notification area; if it doesn\u0026rsquo;t start automatically, you can start it from here as well, by right-clicking on it. (FYI, I actually had to restart my machine before it ran fine, but I am on Windows 10 Insider Preview 19640, if that matters at all :))\nConfirm Docker Desktop Edge is running fine from a Docker perspective, by opening your Command Prompt and running docker info Nice, that upgrade went smoothly already!\nBefore we move on to the next step, let\u0026rsquo;s restore our previously used Docker image (if you created the backup), by running the following command:\ndocker load -i \u0026lt;name_of_the_backupimage\u0026gt;.tar and validate by running\ndocker images On to the next step\u0026hellip; Now that we are running the latest Docker Desktop Edge, it is time to play around with the newest Azure Container Instance (ACI) integration - which is the whole point of this blog post.\nIn short, you go through the following steps:\nAuthenticate to Azure, directly from Docker Connect Docker to Azure Container Instance by creating a \u0026ldquo;Docker Context\u0026rdquo; (think of this as an environment with its own settings,
much like dev/test, staging, production. Or in our case, the \u0026ldquo;default context\u0026rdquo; being your local machine running Docker, and the other one being \u0026ldquo;Azure\u0026rdquo;) Allocate a Docker Hub image to run as an Azure Container Instance, and run it Authenticate to Azure, directly from Docker The first feature that is part of Docker Desktop Edge allows us to authenticate to Azure, directly from the Docker engine. Initiate the following command:\ndocker login azure This will prompt you for your Azure subscription credentials in a browser, just like a regular Azure authentication prompt (this also supports MFA, making this a rather secure option)\nCreating a Docker Context From your Command Prompt, create a new Docker Context, by running the following command:\ndocker context create aci \u0026lt;name_for_the_context\u0026gt; Based on the authenticated logon from the previous step, it will list the different Azure subscriptions linked to your account; using the \u0026ldquo;arrow\u0026rdquo; keys, you can select the subscription you want to use. Next, it will list the different Resource Groups within your subscription.\nIf you don\u0026rsquo;t want to use an existing Resource Group, you can create a new one:\nWhile this works, the naming convention for the newly created Resource Group is probably not going to work in most organizations (naming convention policies etc\u0026hellip;); so let\u0026rsquo;s run this command again, and create a new context based on an already existing Resource Group we want to use, by running the following command:\ndocker context create aci \u0026lt;name_for_the_context\u0026gt; --location \u0026lt;azure_region_name\u0026gt; --resource-group \u0026lt;name_of_the_Azure_RG\u0026gt; --subscription \u0026lt;name_of_the_Azure_subscription\u0026gt; The Docker Context, pointing to Azure ACI, is available now.
Let\u0026rsquo;s continue with running an actual container in the next step.\nRunning your ACI using a Docker Hub image Running a Docker container within the ACI instance is based on the exact same Docker command you would use if it was running on your local machine:\ndocker run -d -p \u0026lt;portmapping\u0026gt; \u0026lt;name_of_the_container_image\u0026gt; which looks like this for my example:\nwhere\n80:80 tells the container to run the workload on port 80, and expose it to the outside world on port 80 as well pdetender/simplcommerce points to an e-commerce application container I have available in my Docker Hub repository At first, a new unique name for the Docker container runtime gets created (\u0026ldquo;trusting-cartright\u0026rdquo; in my example), followed by the deployment of a new Azure Container Instance\nIn less than a minute, the job completes successfully. Time to validate the running container. This - again - is identical to validating your running Docker container instances on your local machine:\ndocker ps which shows you the running container instance, as well as the necessary details like the public IP address of the instance. From your browser, connect to this public IP address, and see our sample workload in action:\nYou can also validate this from the Azure Portal, by connecting to the Azure Container Instance (this could also be done from Azure CLI or Azure PowerShell to be complete\u0026hellip;)\nWonderful!
This new Docker Edge integration with ACI is a nice improvement, saving several steps compared to the \u0026ldquo;old way\u0026rdquo;.\nThis completes the core of what I wanted to discuss in this post, showing you the nice capabilities of Docker Desktop Edge to natively deploy and run an Azure Container Instance.\nRunning your ACI using an Azure Container Registry (ACR) image \u0026lt;these steps are not required anymore as part of the process, but I just wanted to do some additional testing :)\u0026gt;\nThe previous example was using a public Docker Hub container image. So I was wondering if this would also work for a (private) Docker image I already have in my Azure Container Registry. Let\u0026rsquo;s give it a try:\ncreate a new Docker Context for ACI Run the Docker container, pointing to the Azure Container Registry image Hmm, that\u0026rsquo;s an interesting error message\u0026hellip; something \u0026ldquo;gcloud\u0026rdquo; related (=Google Cloud Platform :)). After some searching on the interwebs, it seems like my Docker instance has some default authentication providers in its config.json file\u0026hellip; interesting\nApparently it is safe to remove the \u0026ldquo;CredHelpers\u0026rdquo; section, save the file and run the \u0026ldquo;docker run\u0026rdquo; again:\nWhile that weird gcloud error is gone, we are not quite there yet. But this error makes more sense to me. What it says here is that the Docker Context cannot connect to the Azure Container Registry.
Of course not, I need to authenticate to ACR first (az acr login), just like when I am running this locally on my machine:\nwhere -g refers to the name of the Resource Group holding the Azure Container Registry, and -n refers to the name of the Azure Container Registry itself\nwhich works much better now; similar to the first example, a new Azure Container Instance is getting deployed:\nLet\u0026rsquo;s validate once more by initiating \u0026ldquo;docker ps\u0026rdquo;, which shows the following:\nand checking from the browser if the workload is actually showing what it needs to show (note it is the same workload, just a different product category):\nand lastly, checking back on what it looks like from the Azure Portal\nI love this!!\nSummary In this post, I introduced you to a brand new capability of Docker Desktop Edge, providing a direct (almost native) integration with Azure Container Instance. This allows you to deploy and run a container instance on Azure, without much hassle. I showed you how this works with public Docker Hub images, as well as with more private images from an Azure Container Registry.\n","date":"2020-07-05T00:00:00Z","permalink":"/post/use_docker_edge_to_deploy_aci/","title":"Use Docker Edge to Deploy Azure Container Instance - ACI"},{"content":"During several of my AZ-400 Designing and Implementing Microsoft DevOps Solutions training deliveries, one recurring point of conversation is: Should we use YAML or the Classic Designer for our Release pipelines?\nSo I thought sharing my view in another blog post could be helpful.\nBefore answering the question more accurately, let\u0026rsquo;s go over each scenario in a bit more detail:\nClassic Designer The Classic Designer has been the long-standing approach for how Azure DevOps Pipelines are created.
Using a user-friendly graphical user interface, one can add tasks to create a pipeline just by searching for them in a list of tasks, and completing the necessary parameters.\nThe example below is what I use for building Docker containers:\nAs you can see, this looks quite straightforward to anyone, even if you are totally new to Azure DevOps.\nIf I want to update my pipeline with another task, for example Docker CLI Installer, I just click on add task and search for all \u0026ldquo;Docker\u0026rdquo; related tasks in the list,\nand select the related task I want:\nOnce you are familiar with the actual steps of how to build and compile containers from a command line, moving the manual steps to an Azure Pipeline is almost 100% the same. In the end, you are literally automating your manual approach.\nYAML Now, let\u0026rsquo;s take a look at the YAML (\u0026ldquo;YAML Ain\u0026rsquo;t Markup Language\u0026rdquo;, originally \u0026ldquo;Yet Another Markup Language\u0026rdquo;) approach. There is no graphical designer here, but rather a text config file you need to build up, describing the different steps you want to run as part of your Azure Release Pipeline.\nYAML was introduced into Azure DevOps in mid-2018 already, but I still see a lot of customers not using it that often yet.\nUsing a similar example as before, the YAML file looks like this:\npool: name: Azure Pipelines steps: - task: qetza.replacetokens.replacetokens-task.replacetokens@3 displayName: \u0026#39;Replace tokens in appsettings.json\u0026#39; inputs: rootDirectory: \u0026#39;$(build.sourcesdirectory)/src/MyHealth.Web\u0026#39; targetFiles: appsettings.json escapeType: none tokenPrefix: \u0026#39;__\u0026#39; tokenSuffix: \u0026#39;__\u0026#39; - task: qetza.replacetokens.replacetokens-task.replacetokens@3 displayName: \u0026#39;Replace tokens in mhc-aks.yaml\u0026#39; inputs: targetFiles:
\u0026#39;mhc-aks.yaml\u0026#39; escapeType: none tokenPrefix: \u0026#39;__\u0026#39; tokenSuffix: \u0026#39;__\u0026#39; - task: DockerInstaller@0 displayName: \u0026#39;Install Docker 17.09.0-ce\u0026#39; - task: DockerCompose@0 displayName: \u0026#39;Run services\u0026#39; inputs: dockerComposeFile: \u0026#39;docker-compose.ci.build.yml\u0026#39; action: \u0026#39;Run services\u0026#39; detached: false - task: DockerCompose@0 displayName: \u0026#39;Build services\u0026#39; inputs: dockerComposeFile: \u0026#39;docker-compose.yml\u0026#39; dockerComposeFileArgs: \u0026#39;DOCKER_BUILD_SOURCE=\u0026#39; action: \u0026#39;Build services\u0026#39; additionalImageTags: \u0026#39;$(Build.BuildId)\u0026#39; - task: DockerCompose@0 displayName: \u0026#39;Push services\u0026#39; inputs: dockerComposeFile: \u0026#39;docker-compose.yml\u0026#39; dockerComposeFileArgs: \u0026#39;DOCKER_BUILD_SOURCE=\u0026#39; action: \u0026#39;Push services\u0026#39; additionalImageTags: \u0026#39;$(Build.BuildId)\u0026#39; - task: DockerCompose@0 displayName: \u0026#39;Lock services\u0026#39; inputs: dockerComposeFile: \u0026#39;docker-compose.yml\u0026#39; dockerComposeFileArgs: \u0026#39;DOCKER_BUILD_SOURCE=\u0026#39; action: \u0026#39;Lock services\u0026#39; - task: CopyFiles@2 displayName: \u0026#39;Copy Files\u0026#39; inputs: Contents: | **/mhc-aks.yaml **/*.dacpac TargetFolder: \u0026#39;$(Build.ArtifactStagingDirectory)\u0026#39; - task: PublishBuildArtifacts@1 displayName: \u0026#39;Publish Artifact\u0026#39; inputs: ArtifactName: deploy That\u0026rsquo;s rather different, isn\u0026rsquo;t it?\nEach individual task I just had to select before now requires a set of instructions in a config file. Luckily, Azure DevOps still provides a graphical interface to pick the tasks, which can be helpful in the beginning, while YAML is still new to you.\nThe good news is, from a Build Pipeline perspective, both methods provide the same result.
So the key question is: which one to go for?\nClassic Designer or YAML After discussing this topic with several students during my deliveries, I came up with a good-and-bad list for each. Know this is far from complete, and I\u0026rsquo;m not trying to push you in a certain direction at all, but merely providing an overview.\nAdvantages of the Classic Editor Ease of use Clear overview of what tasks the pipeline is based on Lots of preconfigured task snippets available No \u0026ldquo;development\u0026rdquo; language to learn Disadvantages of the Classic Editor Less obvious source control/version control Specific to Azure DevOps Slow to create or update your pipelines Microsoft-native While not immediately, it will phase out at some point Advantages of YAML 100% code-based, which means you can manage it like your application source code in source/version control Easy to make changes (once you know how the language works) Easier to compare changes (e.g. Azure Repos \u0026ldquo;file compare\u0026rdquo; feature) Code snippets can be shared easily with colleagues, much more easily than screenshots The same YAML concept is used by Docker, Kubernetes,\u0026hellip; and several other \u0026ldquo;configuration as code\u0026rdquo; tools The View YAML option in the Classic Editor shows the snippet of a GUI task translated to YAML Disadvantages of YAML Scary at first, especially if you are not a developer Harder to learn the \u0026ldquo;language\u0026rdquo; when you are used to the graphical UI Summary Again, this list is probably far from complete, and it mostly depends on your personal preferences. For me, I still see myself often going to the Classic Editor rather than using YAML, but I am also trying to change my behavior :). Knowing YAML is becoming somewhat of a standard in other tools and platforms (think of Docker, Kubernetes,\u0026hellip;), it makes total sense to also adopt it in Azure DevOps.
Next, there is a tendency to move to an \u0026ldquo;anything as code\u0026rdquo; approach (Infrastructure as Code, Configuration as Code, now Pipelines as Code,\u0026hellip;) which allows for easier creation, change, version control and collaboration across teams. And isn\u0026rsquo;t that the ultimate idea behind DevOps after all?\nPing me on Twitter or send me an email if you want to share your feedback on this.\nStay safe and healthy you all!\n/Peter\n","date":"2020-06-21T00:00:00Z","permalink":"/post/azure_devops-pipelines-yaml_or_classic_designer/","title":"Azure DevOps Pipelines - YAML or Classic Designer"},{"content":"Last Friday, I delivered an online session at \u0026ldquo;Azure Day Rome 2020\u0026rdquo;, titled \u0026ldquo;Azure is 100% high available, or is it?\u0026rdquo;\nSince the conference sessions were only 45 minutes, I didn\u0026rsquo;t have much time to drill down into all the details, but apparently I managed to provide a clear and easy overview of several misconceptions around public cloud high availability, and even more importantly, how Azure provides several services and architectures to optimize the overall high availability of your workloads. Whether you deploy IaaS, PaaS or serverless.\nAlready during the session, as well as afterwards, I started getting emails and social media messages from attendees, asking if I could \u0026ldquo;confirm\u0026rdquo; their current architectures, or recommend any changes to their existing deployments.\nWhat better inspiration to have for another blog post, right?\nMeasuring SLAs High availability is expressed in a Service Level Agreement (SLA), determining how many seconds, minutes or hours of (potential) downtime are to be expected.
The common ones are\n99.9% (\u0026ldquo;the 3 nines\u0026rdquo;), 99.99% (\u0026ldquo;the 4 nines\u0026rdquo;) and 99.999% (\u0026ldquo;the five nines\u0026rdquo;), but these are not the only ones.\nHere is a summarized view of the common SLAs across different Azure services:\nNote: Azure service SLAs always refer to monthly numbers\nAs you can already learn from this table, Azure (just like any other public cloud, as well as on-premises datacenters for that matter\u0026hellip;) does not provide a 100% SLA.\nBut this is not what the session was about, obviously. What\u0026rsquo;s more important is how to achieve the ultimate SLA in Azure, for the different architectures available today.\nAzure Regions A first level of redundancy one can make use of are the different Azure Regions, available across the globe. Technically, you can decide on any regions you want to use for your cloud-running workloads (exceptions are US Government, China and \u0026ldquo;local clouds\u0026rdquo;), boosting the high availability from a regional perspective. Simply put: instead of deploying your workload (Azure Virtual Machine, Azure Web App, Azure Functions,\u0026hellip;) in a single region, deploy it in 2 or more regions.\nLuckily not too frequently, but every now and then, an Azure region becomes totally unavailable. Historical situations were mostly related to weather conditions (heavy storms in San Antonio Tx, 2018) or human mistakes (releasing a faulty patch to the Azure storage fabric in multiple regions at once, 2019).\nTo get a clear view on the status of any given Azure region, as well as all services running within that region, have a look at Azure Status\nAzure Networking Azure Virtual Networking is the cornerstone of a lot of Azure services in IAAS, PAAS and Serverless. This means those services won\u0026rsquo;t be able to run if the underlying network stack is having issues. 
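To put these percentages into perspective, the math behind them is simple. Here is a quick sketch (not an official Azure formula; it assumes a 30-day month for simplicity, and treats redundant deployments as failing independently):

```python
# Two quick helpers to reason about SLA numbers.
# Azure SLAs are monthly; a 30-day month is assumed here for illustration.

MINUTES_PER_MONTH = 30 * 24 * 60  # 43,200 minutes

def allowed_downtime_minutes(sla_percent):
    """Maximum downtime per month (in minutes) an SLA still permits."""
    return MINUTES_PER_MONTH * (1 - sla_percent / 100)

def composite_availability(single, n):
    """Availability of n redundant, independent deployments (0..1 scale):
    the workload is only down when all n deployments fail at once."""
    return 1 - (1 - single) ** n

for sla in (99.9, 99.95, 99.99, 99.999):
    print(f"{sla}%: {allowed_downtime_minutes(sla):.2f} minutes/month")

# Two independent regions, each at 99.99%:
print(composite_availability(0.9999, 2))
```

So "the 3 nines" still allows roughly 43 minutes of downtime per month, "the five nines" only about 26 seconds, and two independent 99.99% deployments combine to roughly 99.999999% (under the simplifying assumption that their failures are uncorrelated), which is why multi-region architectures get "close to 100%".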
But outside of that, there are also a few services within the Azure Networking provider helping you to optimize the SLA of your non-networking-related services. I\u0026rsquo;m talking about Load Balancers.\nAzure provides 4 different load balancing services:\nAzure Load Balancer\nAzure Application Gateway\nTraffic Manager\nAzure Front Door\nWhile I will definitely dedicate additional blog posts to these, let me summarize the core characteristics of each:\nAzure Load Balancer and Azure App Gateway Azure Load Balancer is a layer 4 (transport) load balancer, capable of load balancing any IP traffic to defined endpoints (Virtual Machines, App Services, DB Services,\u0026hellip;), and can be set up in an internal-only (no public IP) or external-only (only public IP) configuration. (more info can be found at Azure Load Balancer)\nAzure Load Balancer guarantees a 99.95% SLA\nAzure App Gateway is a layer 7 (application) load balancer, capable of load balancing HTTP and HTTPS traffic only, to defined endpoints (Virtual Machine Web Server, App Services), and can be configured as internal or external. The major difference with Azure Load Balancer is that it only recognizes web traffic, but on top of that, it also comes with web-traffic-specific features like SSL offloading, session affinity, URL redirection and WAF - Web Application Firewall (more info can be found at Azure App Gateway)\nAzure App Gateway guarantees a 99.99% SLA\nNote: both load balancing solutions are active in a single-region topology, which means they can only act as load balancers for workloads running in the same region as the load balancer itself\nTraffic Manager and Azure Front Door Traffic Manager is a DNS-based load balancer, allowing for load balancing traffic across multiple \u0026ldquo;sites\u0026rdquo;, which could be multiple Azure regions, but also across Azure and a non-Azure region (on-premises, other public cloud,\u0026hellip;). 
It provides several different load balancing mechanisms like round robin, priority, geographical or high-availability. (more info can be found at Traffic Manager)\nAzure Traffic Manager guarantees a 99.99% SLA\nAzure Front Door Azure Front Door is very similar to Azure App Gateway, as it comes with a lot of identical features, but runs as a global Azure service. This means it is not limited to a specific region, but rather is deployed for load balancing web traffic across multiple regions. Where a region can be an Azure region, or any other public endpoint (other public clouds, on-premises public-internet-facing web applications,\u0026hellip;) (more info can be found at Azure Front Door)\nAzure Front Door guarantees a 99.99% SLA\nAzure Virtual Machines Probably one of the most sought-after SLAs is the one for running Virtual Machine-based workloads on Azure. Since these are closest to the traditional on-premises datacenter architecture, they are the most familiar. Azure Virtual Machines can be deployed in 3 different architectures, each providing a different SLA:\nSingle Virtual Machine When you deploy a single Virtual Machine instance in an Azure Region, using (the default) premium managed disks, it provides an SLA of 99.9%. This is OK, but not always sufficient for a production scenario. The underlying infrastructure is a physical server, running in a physical rack in an Azure region\u0026rsquo;s datacenter.\nAzure Availability Sets To optimize the SLA of Virtual Machines, the next option is an Availability Set, moving the SLA up to 99.95%. An Availability Set is a bit of Azure intelligence, by which you deploy 2 or more instances of an identical VM setup. Each VM is guaranteed to run on a different physical server, in a different physical rack in the same Azure Region. In case of downtime (planned or unplanned) of a physical rack (defined as a Fault Domain) or any of its components, this will obviously bring down the running Virtual Machine. 
But the other instance(s) are not impacted by it, optimizing the high availability.\nA potential downside of an Availability Set is that it is still bound to the same building. So the multiple Virtual Machine instances you have running across different physical racks won\u0026rsquo;t help much in case of a full building impact.\nThat\u0026rsquo;s where you can opt for another setup, using Availability Zones.\nAzure Availability Zones Availability Zones are the ultimate architecture when you look for the best high availability for your business-critical Virtual Machine workloads. Besides running highly available across multiple physical racks (similar to AVSets before), the physical racks are also spread across multiple buildings. In case of a complete building outage, your instance(s) will still be available in any of the other buildings. But still within the same region. Look here for additional info on Availability Zones\nNote: to reach a \u0026ldquo;close to 100%\u0026rdquo; Virtual Machine high availability, one should consider deploying VM workloads in Availability Zones across multiple Azure Regions (keep in mind, though, that this might drive the cost up dramatically, as each VM instance incurs full consumption cost)\nAzure App Services Azure App Services is the \u0026ldquo;umbrella terminology\u0026rdquo; for different Platform as a Service (PAAS) services like Web Apps, API Apps, Mobile Apps, Logic Apps and Azure Functions. 
In terms of high availability, it differs for each service within this classification:\nLogic Apps comes with a 99.9% SLA\nWeb Apps, API Apps and Mobile Apps carry a 99.95% SLA, as does Azure Functions\nEventGrid guarantees a 99.99% SLA, which is interesting, as it mainly relies on any of the other App Services for its functioning\nAzure Container Services Last, let me touch on the SLAs for the different Azure Container Services.\nThe core services related to Azure Containers are:\nAzure Container Registry (99.9%)\nAzure Container Instance (99.9%)\nAzure Kubernetes Service (99.95%)\nGiven the popularity and business-criticality of containers these days, I am personally a bit surprised to see these rather low numbers. On the other hand, knowing containers typically run for a short period of time, the impact could be quite low. Azure Container Registry mainly follows the SLAs of its underlying Azure storage service, while Azure Kubernetes Service relies on Azure Availability Sets.\nSummary This blog post only gives a short overview of different Azure service architectures, and the different SLAs they offer within each service. Ultimately, your high-availability architecture for any given workload should probably combine several of these services. For example, if you deploy multiple VM instances as part of Availability Zones, you would still need to add a load balancing solution next to it, to guarantee the high availability of the workload itself.\nGiven the complexity of specific scenarios, I guess I can come up with a few concrete examples from customers, and the architectural designs I used, to give you some ideas about what and how to architect.\nHowever, keep the following in mind:\nStay safe and healthy you all!\n/Peter\n","date":"2020-06-14T00:00:00Z","permalink":"/post/azure_is_100_high_available/","title":"Azure is 100% High Available!! 
or is it...?"},{"content":"Hi again,\nIn about every Azure training delivered over the last few months, I have been talking about Docker and Azure Kubernetes Services - AKS\nAlong these months, the number of \u0026ldquo;sample PODS\u0026rdquo; I am running within the Kubernetes cluster kept growing, resulting in a less efficient demo scenario to show.\nSo cleaning up these running PODS was my 5-second action this Saturday morning. While not super hard, it actually took me a bit longer than 5 seconds (more like 10 minutes :)), since I forgot a few \u0026ldquo;basics\u0026rdquo; about how Kubernetes runs PODS.\nTo save myself some time in the future, and even more, to help readers avoid making the same mistake, I took note of it:\nThe Before Situation I am not discussing how to deploy AKS on Azure, as there is already enough documentation on how to achieve this using the Azure Portal, as well as using Azure CLI.\nDeploying PODS (=your Docker containerized application) to the Kubernetes cluster is done using a \u0026ldquo;kubernetes.yml\u0026rdquo; file, holding settings like the application name, the number of container replicas you want to run within the cluster for high availability, and the link to the Azure Container Registry where the container image can be found.\nA sample kubernetes.yml looks like this:\nSome important settings in this file are:\nmetadata / name this is the name of the deployment (important for later\u0026hellip;!) 
(agderkubdemo in my example)\ntemplate / app this is the name of the application within the AKS cluster (agderkubdemo in my example)\ncontainers / name this is the name of the container\ncontainers / image this is the reference to the Azure Container Registry (or another registry, or the public Docker Hub) repository, i.e. the name of your Docker container image\nOnce you have this file, you can run the following command to get the PODS deployed to your AKS cluster:\nkubectl apply -f \u0026lt;path to the kubernetes.yml file\u0026gt; So that\u0026rsquo;s what I currently had: a running AKS cluster with a few tens of these sample app containers running :)\nHow to delete PODS from AKS If you want to know which PODS you are actually running on your AKS cluster, run the following command:\nkubectl get pods which looks similar to what I have in my environment:\nEasy enough, there is a kubectl command to delete PODS, go figure:\nkubectl delete pod \u0026lt;name of the POD\u0026gt; which nicely deletes the identified POD\nor did it?\nApparently the PODS were not really getting deleted the way I wanted them to be completely removed from the cluster. My \u0026ldquo;active\u0026rdquo; PODS turned to a \u0026ldquo;terminating\u0026rdquo; state, but at the same time, there were 2 new PODS running the same application. What\u0026rsquo;s going on?\nAfter a few seconds, it struck me what AKS was doing here\u0026hellip; The built-in high availability of Kubernetes always tries to make sure it has container instances running, according to\u0026hellip; what you defined in your deployment (=the kubernetes.yml file).\nLet\u0026rsquo;s check that file again:\nI had my spec / replicas set to \u0026ldquo;3\u0026rdquo;, which means Kubernetes runs 3 identical container instances of my application (for high availability). 
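For reference, the kind of kubernetes.yml described above might look roughly like this. This is a hedged sketch: the names mirror the agderkubdemo example, and the registry URL is a placeholder, not the actual one used in the demo:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: agderkubdemo            # the deployment name ("kubectl delete deployment" targets this)
spec:
  replicas: 3                   # Kubernetes keeps 3 identical instances running
  selector:
    matchLabels:
      app: agderkubdemo
  template:
    metadata:
      labels:
        app: agderkubdemo       # the application name within the cluster
    spec:
      containers:
        - name: agderkubdemo
          image: myregistry.azurecr.io/agderkubdemo:latest  # placeholder ACR image reference
```

The replicas field is exactly what causes the behavior described next: Kubernetes reconciles the actual state back to 3 instances whenever a POD disappears.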
So in reality, when you run the delete action against a replica, AKS just starts up new instances, to comply with the 3 running instances you asked for.\nSo there must be another way to run the deletion.\nOne source I found on the internet recommended setting the replica parameter to \u0026ldquo;0\u0026rdquo;, but that felt a bit weird to me (although I actually tried it and it worked).\nHowever, the best practice seems to be deleting the actual deployment. Remember I pointed this out earlier: this setting is in the \u0026ldquo;kubernetes.yml\u0026rdquo; file as well, and I mentioned it was important\nmetadata / name this is the name of the deployment (important for later\u0026hellip;!) (agderkubdemo in my example)\nWithin Kubernetes, when you run a \u0026ldquo;kubectl apply\u0026rdquo; action, it remembers this state as a deployment. So by removing this deployment, it will also remove the corresponding PODS. Let\u0026rsquo;s give that a try:\nkubectl delete deployment \u0026lt;deployment name\u0026gt; (=from the metadata / name setting in the YML file...) 
Or you could also use the \u0026ldquo;\u0026ndash;all\u0026rdquo; parameter as follows, to delete all previous deployments at once:\nkubectl delete deployment --all If we now check what happens with the running PODS, they will all be nicely terminated, and eventually get deleted from the AKS environment:\nkubectl get pods Summary This post described how you can successfully delete running PODS from an AKS environment, using different scenarios.\nSee you all soon, reach out when you have any questions on AKS or Azure in general,\nCheers, Peter\n","date":"2020-06-06T00:00:00Z","permalink":"/post/how-to-delete-an-aks-pod/","title":"How to delete a POD from Azure Kubernetes Services (AKS)"},{"content":"Hey there,\nWith the May update of Windows 10 (version 2004 :) ) being available since this week, together with the DockerCon virtual conference, I think it was the right time to (finally) migrate my current Docker Desktop in Hyper-V mode to the new WSL 2 (Windows Subsystem for Linux).\nIn short, the process was smooth, straightforward, and had no real impact on the \u0026ldquo;demo environment\u0026rdquo; I\u0026rsquo;m using continuously during my Azure training workshops and public speaking gigs.\nThis also only took about 20 minutes of my time, including writing this blog post. LOL.\nHere we go:\nPrerequisites The main prerequisite I want to highlight here is that you need the May 2020 update for your Windows 10 machine; if you don\u0026rsquo;t have it yet, here is a quick how-to to install the May 2020 update: Go to Settings / Update \u0026amp; Security / Windows Update. Here, select \u0026ldquo;Check for updates\u0026rdquo;.\nOnce the update is listed, select Download and install. (If you don\u0026rsquo;t see the notification to download and install, the update may not have been published yet for your machine/region, but you should receive it any time soon. 
I mean it\u0026rsquo;s the May update :) (it actually was the April update, but due to COVID-19 it got pushed out a bit). Also make sure you are currently running Windows 10 version 1903 or 1909.)\nOnce the download is finished and ready to install, you\u0026rsquo;ll get a notification to choose the right time to finish the installation and reboot your computer. I actually ran this automated overnight, and it welcomed me this morning.\nWhile not required for my Docker and WSL 2 upgrade, I was surprised there is no new Edge browser included with this release, so that\u0026rsquo;s the first application I updated\u0026hellip;\nInstalling WSL 2 WSL (Windows Subsystem for Linux) was released almost 3 years ago, and recently got upgraded to v2 as part of the Windows 10 May update. It provides an almost full Linux distro (Ubuntu, openSUSE, Kali, Debian,\u0026hellip;). WSL 2 comes with incredible performance improvements, nicer integration for mixed Windows/Linux platform developers (did somebody say dotnet core?) and also provides Docker support. If you were running WSL v1 already, you don\u0026rsquo;t have to do anything, but you will get a notification from within the WSL environment to upgrade to WSL 2. I wasn\u0026rsquo;t running WSL yet, so I went through the following steps, per the Microsoft documentation:\nOpen a PowerShell session as Administrator, and run the following command to install the WSL feature:\ndism.exe /online /enable-feature /featurename:Microsoft-Windows-Subsystem-Linux /all /norestart\nNext, install the Virtual Machine Platform (hypervisor) feature, by running the following command:\ndism.exe /online /enable-feature /featurename:VirtualMachinePlatform /all /norestart\nEnable WSL 2 as the default by running the following command:\nwsl --set-default-version 2\nYou can now install your Linux distro of choice, by launching the Microsoft Store App on your Windows 10 machine. 
I selected Ubuntu, but know you have a few other ones available as well.\nClick Install.\nWait for the install to complete, and press Launch to start the Ubuntu environment.\nGive it a few minutes to finalize the Ubuntu installation within the WSL environment. You are also prompted for a Linux local administrative username and password (this can - and SHOULD - be different from your Windows local admin account credentials, for security reasons\u0026hellip;!)\nAfter a few moments, your Ubuntu environment is up and running. Again, this replaces any former Ubuntu virtual machine you had running on Hyper-V, VirtualBox, VMware Player,\u0026hellip; on that same Windows 10 machine of yours. Keep in mind you cannot install any \u0026ldquo;GUI\u0026rdquo; applications inside the WSL environment, but you can use any command-line-based application. It\u0026rsquo;s a full Linux distro, remember!!\nSwitching Docker to WSL 2 My setup here involved a \u0026ldquo;migration\u0026rdquo; from Docker using its own Moby Hyper-V VM to WSL 2; this means I\u0026rsquo;m losing the current Linux containers I already use within my Docker environment. 
If you want to reuse them within the WSL environment, make sure you get a list of them before switching the Docker mode, by running the following command (PowerShell or CMD prompt):\ndocker images which gives you an overview of the (Linux) Docker images you currently have on your machine\nI only have a few left, since I did a nice cleanup before (docker rmi )\nFrom the Docker Desktop context menu / Settings, enable \u0026ldquo;Use the WSL 2 based engine\u0026rdquo;. While not really needed, it\u0026rsquo;s always nice to validate this is actually working fine; the first check I did was executing a \u0026ldquo;docker info\u0026rdquo; command, which shows the running state of the Docker engine, while at the same time validating that the former Docker Moby VM is down - obviously this was the case: We can now download and run our former Docker images again, to have the same setup as before; on my machine, I had a few images available, like \u0026ldquo;Ubuntu\u0026rdquo; and \u0026ldquo;SimplCommerce\u0026rdquo; (an e-commerce app I use in workload demos,\u0026hellip;); let\u0026rsquo;s grab these by executing a \u0026ldquo;docker run\u0026rdquo; command: for my pdetender/simplcommerce (on Docker Hub) and\nfor a sample Ubuntu container; awesome! it works!\nUpdating (or installing) Visual Studio Code - Docker Extension Managing Docker is all command-line based, and it\u0026rsquo;s not always that convenient to remember all commands during live demos. And even during day-to-day operations, I tend to make my life a bit easier if there is a GUI available for \u0026ldquo;easy tasks\u0026rdquo;. That\u0026rsquo;s where VSCode extensions are powerful. 
Including the Docker one; if you haven\u0026rsquo;t installed it yet, please do so :).\nRight after my upgrade to WSL 2 above, it got picked up by VSCode immediately, showing me the following notification: which I obviously installed, ending up with (yet another) extension:\nNext, I checked whether my Docker extension was updated to the latest version (if you installed this extension already, it typically runs a silent update by itself\u0026hellip;). This allows us to manage our Docker environment from the VSCode GUI now: guaranteeing again some nice demos during my upcoming Azure workshops!\nSummary In today\u0026rsquo;s post, I walked you through an upgrade (or installation\u0026hellip;) of Docker Desktop on Windows 10 from the Moby VM Hyper-V setup to the latest WSL 2, thanks to an upgrade in the Windows 10 May 2020 update build. While I only did a few quick functional tests, making sure my environment is still running as before, I have a slight feeling this WSL 2 is going to be used much more, and not just for my Docker integration.\nPing me if you have any questions!\n/Peter\n","date":"2020-05-30T00:00:00Z","permalink":"/post/migrate-docker-desktop-to-wsl2/","title":"Migrating Docker Desktop to WSL2"},{"content":"Only about a month ago, I decided to move my former website (running on Wix) to the Open Source Hugo platform, running it as a static website with MarkDown, using Azure Storage Static Website. 
For more details on how to do this, have a look at my blog post here\nI have to say, it runs fine, is cheap, fast, reliable,\u0026hellip;\nBut then I discovered the new Azure Static Web Apps capability, as announced during the //Build conference earlier this week, so I wanted to give it a try.\nAnd instead of starting from scratch, why not reuse the Hugo content I already have?\nAzure Static Web Apps allows you to run JavaScript-based static web apps; technically, Hugo does the same thing: by creating your blog post in MarkDown and running \u0026ldquo;hugo\u0026rdquo;, it compiles your new blog post, images,\u0026hellip; into a static HTML page. It\u0026rsquo;s this page and corresponding images (if any) that get uploaded to a \u0026ldquo;/public\u0026rdquo; folder. (Same thing happens on Azure Storage Static Site btw). So this was the mechanism I wanted to try.\nOne major difference between Azure Storage Static Site and the new Azure Static Web Apps is its dependency on GitHub. Yes, publishing your static site content happens from a GitHub Actions CI Pipeline. While this also worked for the Azure Storage approach, there you could actually just copy the compiled HTML files using AzCopy or Azure Storage Explorer.\nCreate a GitHub Repo The starting point is having a GitHub Repo available, which contains our content. Again, since I already had all this, this process was quickly done. 
Make sure you remember your GitHub credentials giving access to the repo you want to use, as this will be asked for during the Static Web App deployment.\nDeploy Azure Static Web App From the Azure Portal, select New Resource and search for Static Web App (Preview); click Create. Complete the different parameters required for the resource deployment:\nSubscription = your Azure Subscription\nResource Group = new or existing Resource Group where you want to create the resource\nName = provide a name for the static web app (note this doesn\u0026rsquo;t need to be a unique name like with a regular Azure App Service)\nRegion = close-by region where you want to host the site (note only a handful of Azure Regions are supported for now, but this will probably grow)\nSKU = Free is the only option for now\nNext, you are asked for your GitHub credentials. Provide your GitHub credentials, and accept the application authorization; this will allow for the integration with the GitHub Actions CI Pipeline later.\nOnce the GitHub authorization is confirmed, you can complete the Source Control parameters:\nOrganization = your GitHub account organization\nRepository = select the GitHub Repo containing the sample Hugo website (in my example, this is github.com/pdtit/hugotest1 - feel free to Fork)\nBranch = the Repo branch (typically master, but could be different)\nClick the \u0026ldquo;Next:Build\u0026rdquo; button to move on to the next step in the resource creation process; here, you point to the actual site folder containing the site content. In case of Hugo, this is typically the \u0026quot;/public\u0026quot; folder from your local Hugo development location. Note you only have to complete the App Location parameter, and leave the other 2 empty\nComplete the process by clicking the \u0026ldquo;Review \u0026amp; Create\u0026rdquo; button. When all looks OK, confirm by pressing the \u0026ldquo;Create\u0026rdquo; button. 
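Behind the scenes, this step generates a GitHub Actions workflow file in your repo. A heavily trimmed sketch of what such a workflow typically looks like (the action version, secret name and paths are illustrative assumptions, not the exact generated file):

```yaml
name: Azure Static Web Apps CI/CD
on:
  push:
    branches: [master]
jobs:
  build_and_deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Build And Deploy
        uses: Azure/static-web-apps-deploy@v1
        with:
          azure_static_web_apps_api_token: ${{ secrets.AZURE_STATIC_WEB_APPS_API_TOKEN }}
          action: "upload"
          app_location: "public"   # the folder holding the compiled Hugo output
```

The deployment token is stored as a repo secret during the authorization step, which is why the portal asks for your GitHub credentials.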
Wait for the Azure resource to get created; this shouldn\u0026rsquo;t take that long. Once the resource is created, select \u0026ldquo;Go to Resource\u0026rdquo; from the notification popup appearing; this will redirect you to the actual Static Web App resource that just got created. Notice the unique URL that got created for this specific site. Notice the blue ribbon informing you that the site doesn\u0026rsquo;t have any content yet, and pointing to GitHub Actions. Click on the blue ribbon to get redirected to the GitHub Actions. Notice that an \u0026ldquo;Azure Static Web Apps CI/CD\u0026rdquo; Action is automatically created and running (orange color); give it a few minutes to complete (green color). If you want to see more details about the CI/CD pipeline itself, select the pipeline; this will show the Build and Deploy Job status, exposing details for each and every step in the build process. Verify the Static Site is running The only thing left to do is validate that the website is actually running. To do this, go back to the Azure Portal, and click on the URL of the Static Site\nThis brings up your browser and nicely shows the Hugo website. Notice this is an Azure namespace URL for now, but feel free to continue the configuration by checking out the Custom Domains option.\nWhile this is still in preview, I\u0026rsquo;m pretty convinced this will soon become a very popular service. I know I\u0026rsquo;ll keep using it already!\nAs always, reach out when you have any questions, or feel free to share feedback using my social media links.\nFound this article useful? Consider supporting my blog\n/Peter\n","date":"2020-05-21T00:00:00Z","permalink":"/post/running-your-hugo-site-on-azure-static-webapps/","title":"Running your Hugo site on Azure Static WebApps (Preview)"},{"content":"Hi,\nFor almost 2 years, I have been using the \u0026ldquo;Office 365 Outlook Connector\u0026rdquo; as part of my Logic Apps flows, to send emails internally and externally. 
Mainly for external receivers, I used the \u0026ldquo;is HTML\u0026rdquo; parameter for the body of the email.\nThis weekend, I was building a new Logic Apps flow, and to my surprise, found out that the \u0026ldquo;V2\u0026rdquo; of this same connector / action doesn\u0026rsquo;t have that parameter anymore.\nEven more surprisingly, the HTML code I was using before in the body of my email doesn\u0026rsquo;t even get recognized as HTML (would have been nice if this was just magically built-in now\u0026hellip; no?); the raw HTML code is just sent as body content. Weird\u0026hellip;\nMore important though, is that I found a way to fix this, relying on the \u0026ldquo;Variables\u0026rdquo; connector I used in the past to read and pass on text from one step of my flows to another. Maybe this could work for HTML text as well?\nAdd a step before the \u0026ldquo;Send Email\u0026rdquo; step you already have in your workflow, and search for \u0026ldquo;Variables\u0026rdquo; as connector type. 
Select \u0026ldquo;Initialize Variable\u0026rdquo; as the action, providing the following parameters:\nName: emailbody or something similarly descriptive\nType: String\nValue: leave empty\nNext, add a new step, again selecting the \u0026ldquo;Variables\u0026rdquo; connector, but this time going for the \u0026ldquo;Set Variable\u0026rdquo; action, providing the following parameters:\nName: emailbody or whatever you used as Name in the initialize step\nValue: this is where you paste in the actual HTML code of your email content\nNext, select the Send Email V2 action from the \u0026ldquo;Office 365 Outlook Connector\u0026rdquo;, defining the variable you just set as the body of the email, resulting in the following configuration:\nWhen running your Logic App flow again, you will notice that the email you receive is again in the expected, nicely-looking HTML layout we had before:\nI have no idea why the \u0026ldquo;is HTML\u0026rdquo; setting has been removed from the Office 365 Outlook Connector, but I\u0026rsquo;m glad to know we still have a work-around available to achieve the same result. On the other side, was I that wrong in assuming the body layout should recognize HTML by default now? 
As in anybody still sending emails that are not in HTML layout?\nStay safe and healthy you all!\n/Peter\n","date":"2020-05-10T00:00:00Z","permalink":"/post/email-is-html-gone-logicapps/","title":"is HTML parameter gone from Logic Apps Send Email Connector"},{"content":"A few days ago, I blogged about Bing Desktop Wallpapers, a nice little tool you can install on your Windows Machine to enjoy some of the amazing views of the world.\nKnowing these got stored on your local machine as JPEGs (C:\Users\u0026lt;user\u0026gt;\AppData\Local\Microsoft\BingWallpaperApp\WPImages), got me the idea to reuse these as \u0026ldquo;Teams Backgrounds\u0026rdquo; during video calls.\nAs the Bing Wallpaper gets updated every day, it would be nice to have a different image in Teams\u0026hellip; every day. So instead of trying to remember to copy a Bing Wallpaper image, why not use a Windows Scheduled Task for this, based on a little PowerShell script?\nThe script could look like this:\n$today = get-date -format \u0026quot;yyyyMMdd\u0026quot;\n$source = \u0026quot;C:\\Users\\petender\\AppData\\Local\\Microsoft\\BingWallpaperApp\\WPImages\u0026quot;\n$target = \u0026quot;C:\\Users\\petender\\AppData\\Roaming\\Microsoft\\Teams\\Backgrounds\\Uploads\u0026quot;\n$targetfile = $today + \u0026quot;.jpg\u0026quot;\ncopy-item -path \u0026quot;$source\\$targetfile\u0026quot; -Destination $target -Force\nSave this file with a PS1 (PowerShell Script) extension, e.g. 
\u0026ldquo;copybingtoteams.ps1\u0026rdquo;, and store it on your local machine (I used my Documents folder for this).\nYou could run this script manually to try it out, and see how nicely today\u0026rsquo;s Bing Wallpaper gets copied to the Teams folder\nSo that works!\nNext, to make this an automated step every morning when logging on to our machine, let\u0026rsquo;s use the Windows Task Scheduler as follows:\nFrom the Start Menu, search for Task Scheduler\nOnce the console is open, right-click Task Scheduler Library, and select \u0026ldquo;Create Task\u0026rdquo;\nFrom the \u0026ldquo;General\u0026rdquo; tab, provide a descriptive name for your task, e.g. \u0026ldquo;Copy Bing Wallpaper to Teams\u0026rdquo;, and keep the default setting to only run this when the user is logged on.\nFrom the Triggers tab, create a new trigger, specify the time you want to launch this script (e.g. 7 am), and set it to run every 1 day\nUnder the Actions tab is where we define the actual script to run. Set \u0026ldquo;Run a Program\u0026rdquo; as Action, and browse to the location where you saved the PS1 file.\nNothing special to configure in the Conditions tab settings, although I did turn off the dependency to start this task only when connected to AC power.\nI didn\u0026rsquo;t make any changes to the Settings tab, so we are good to go to save our settings.\nIf you want to validate the task is going to run fine, you can manually launch it from the Task Scheduler console.\nThat\u0026rsquo;s it. Enjoy your new daily Bing Wallpaper in your Teams video calls!\nHave a great day you all! 
and stay healthy!\n/Peter\n","date":"2020-05-10T00:00:00Z","permalink":"/post/use-bing-wallpaper-as-background-in-teams/","title":"Use Bing Desktop Wallpapers as background images in Teams calls"},{"content":"Welcome back!\nAbout a month ago, I decided to move my former 007FFFLearning website from Wix.com to something \u0026ldquo;easier\u0026rdquo; to use for blog writing. While Wix is an excellent platform, offering an easy way to build a graphical website besides a ton of plug-ins, it also comes with a cost. And since I didn\u0026rsquo;t have a need for a lot of the built-in features I was using out of my own business before I joined Microsoft, I wanted to try something new.\nThat \u0026ldquo;something new\u0026rdquo; eventually was Hugo, an Open Source static site generator, supporting HTML and MarkDown. After a quick test, I found it also worked fine on an Azure Storage static website, which to me was the motivation to give it a try.\n(FYI, have a look here about how to get your own Hugo website started, and hosting it on Azure including Azure CDN or Azure Front Door services\u0026hellip;)\nHaving a platform for blog posts is one thing; getting your hands on statistics around popular posts and overall website visits is maybe even more crucial if you want to take blogging seriously. Since the core backend of my Hugo site is running on Azure, I wanted to integrate Azure Application Insights, since I knew how powerful it is for monitoring web application workloads, running in Azure or elsewhere.\n1. Deploying Application Insights The first step is straightforward if you already have Azure experience. From the Azure Portal, create a new resource, and search for \u0026ldquo;Application Insights\u0026rdquo;. Complete the necessary parameters to get this resource created:\nSubscription Resource Group Name Azure Region Wait for the resource to get created. Once created, open the Application Insights blade:\n2.
Integrating the Instrumentation Key script into the Hugo website From the top right corner, it will show you the \u0026ldquo;Instrumentation Key\u0026rdquo;, which is the unique identifier for this Application Insights instance. This must be linked to each and every web page we publish on our website, to transfer telemetry information back to the App Insights back-end. The way to do this is adding a little JavaScript snippet into the header of the index.html, as described in more detail in the Azure docs. While this sounds like a tremendous job, Hugo actually makes this rather easy. Although you create each standalone post (or other item) as a single MarkDown file, during the \u0026ldquo;rendering process\u0026rdquo;, Hugo compiles this into a static index.html for each post (or other item). This is based on gluing different snippets of the layout together.\nIn my Hugo theme, I found out that using the \u0026ldquo;head.html\u0026rdquo; file in the root of my layout folder \u0026ldquo;(\u0026lt;Hugo_source_folder\\themes\u0026lt;themename\u0026gt;\\layouts\\partials)\u0026rdquo; would do the trick.\nOpen this file in a text editor like VS Code, and browse all the way to the end of the section.
Paste in the following lines (as shown on the Azure Docs page), replacing the INSTRUMENTATION_KEY with the one you find in the Application Insights Overview section: \u0026lt;script type=\u0026#34;text/javascript\u0026#34;\u0026gt; var sdkInstance=\u0026#34;appInsightsSDK\u0026#34;;window[sdkInstance]=\u0026#34;appInsights\u0026#34;;var aiName=window[sdkInstance],aisdk=window[aiName]||function(e){function n(e){t[e]=function(){var n=arguments;t.queue.push(function(){t[e].apply(t,n)})}}var t={config:e};t.initialize=!0;var i=document,a=window;setTimeout(function(){var n=i.createElement(\u0026#34;script\u0026#34;);n.src=e.url||\u0026#34;https://az416426.vo.msecnd.net/scripts/b/ai.2.min.js\u0026#34;,i.getElementsByTagName(\u0026#34;script\u0026#34;)[0].parentNode.appendChild(n)});try{t.cookie=i.cookie}catch(e){}t.queue=[],t.version=2;for(var r=[\u0026#34;Event\u0026#34;,\u0026#34;PageView\u0026#34;,\u0026#34;Exception\u0026#34;,\u0026#34;Trace\u0026#34;,\u0026#34;DependencyData\u0026#34;,\u0026#34;Metric\u0026#34;,\u0026#34;PageViewPerformance\u0026#34;];r.length;)n(\u0026#34;track\u0026#34;+r.pop());n(\u0026#34;startTrackPage\u0026#34;),n(\u0026#34;stopTrackPage\u0026#34;);var s=\u0026#34;Track\u0026#34;+r[0];if(n(\u0026#34;start\u0026#34;+s),n(\u0026#34;stop\u0026#34;+s),n(\u0026#34;setAuthenticatedUserContext\u0026#34;),n(\u0026#34;clearAuthenticatedUserContext\u0026#34;),n(\u0026#34;flush\u0026#34;),!(!0===e.disableExceptionTracking||e.extensionConfig\u0026amp;\u0026amp;e.extensionConfig.ApplicationInsightsAnalytics\u0026amp;\u0026amp;!0===e.extensionConfig.ApplicationInsightsAnalytics.disableExceptionTracking)){n(\u0026#34;_\u0026#34;+(r=\u0026#34;onerror\u0026#34;));var o=a[r];a[r]=function(e,n,i,a,s){var c=o\u0026amp;\u0026amp;o(e,n,i,a,s);return!0!==c\u0026amp;\u0026amp;t[\u0026#34;_\u0026#34;+r]({message:e,url:n,lineNumber:i,columnNumber:a,error:s}),c},e.autoExceptionInstrumented=!0}return t}( {
instrumentationKey:\u0026#34;INSTRUMENTATION_KEY\u0026#34; } );window[aiName]=aisdk,aisdk.queue\u0026amp;\u0026amp;0===aisdk.queue.length\u0026amp;\u0026amp;aisdk.trackPageView({}); \u0026lt;/script\u0026gt; The code within the head.html file should look similar to this now: 3. Render/Compile your Hugo Website From the root folder of the Hugo Website, run hugo to recompile all your content pages (posts and other) into their final index.html files.\nPublish the source code of the website to your Azure Static Site\nBrowse to a few posts on your website, to generate traffic; we will validate this in the next step from within Application Insights\n4. Get statistics from within Application Insights Back in Application Insights in the Azure Portal, browse to Usage and select Users; this will show you a diagram of the user visits for the last 24 hours. You can change the time window if needed by selecting other filters. For example, changing the time window to 7 days changes the view to this in my example: If you click the \u0026ldquo;View More Insights\u0026rdquo; button, you can see additional statistical details about the site visitors, this time nicely structured by country: Scrolling further down shows yet another summary view, summarized by (active) session: Click on any of the active session details; this opens a sidebar view, exposing more granular information about that session: Back in the Usage menu of Application Insights, select Events. This will again show you detailed views on the actual past events for the last 24 hours, 7 days or any other time window you select. You can again click the \u0026ldquo;View More Insights\u0026rdquo; button, to find specific event statistics, which refer to page views of each and every blog post on our website.
Click on any of the event statistics items; this redirects you to another section in Application Insights, the \u0026ldquo;End-to-end transaction details\u0026rdquo; From this view, select All available telemetry information for this session, under the Related items section: Which in turn exposes additional details about each and every item viewed during that specific session 5. Summary Application Insights is a powerful (web) application monitoring and troubleshooting tool within Azure, coming with impressive dashboards. This allows for very detailed analytics of web sessions, users, telemetry information and more. While App Insights has a lot more features than what I covered here, it helps in getting a clear view of your Hugo website view statistics.\nAnd that was the core objective of this post.\nI\u0026rsquo;m learning more about Hugo in combination with Azure services, and love it more each and every day. Expect more posts around these subjects in the near future.\nFor now, stay safe and healthy! As always, reach out to share your feedback or ask questions.\n/Peter\n","date":"2020-04-26T00:00:00Z","permalink":"/post/hugo_site_statistics_app_insights/","title":"Collecting Hugo static site statistics using Azure Application Insights"},{"content":"Earlier this week, I had the honor to present a session for the Belgian MC2MC Microsoft community, as part of a broader online event week with other Belgian community User Groups (https://www.mc2mc.be/events/be-community-week-mc2mc-evening).\nMy session topic was based on a little real-life project I worked on myself only a few weeks ago, deploying a \u0026ldquo;Hugo\u0026rdquo; based static website on Azure Storage Account Static Site (Yes, this site you are reading right now\u0026hellip;).\nAs there were quite some steps to go through, I decided to write a full blog post about it.
And that\u0026rsquo;s exactly what brings you here\u0026hellip;\nIf you are new to Hugo, have a look at the official website for more information: http://gohugo.io\nAlthough Hugo supports both HTML and MarkDown, I decided to go for MarkDown; if this is unknown to you, think of it as a web page markup language, with a specific syntax. While it might feel difficult in the beginning, it is actually rather straightforward to use once you get the hang of it. I could also recommend MarkDown Monster as an editor, if you are not into using Visual Studio Code.\n-\u0026mdash;\u0026mdash;\u0026mdash;\u0026mdash;\u0026mdash;-\n1. Getting started Install Hugo on your local machine, from http://gohugo.io (I\u0026rsquo;m running this on Windows 10, but it also supports Linux and MacOS)\nNext, grab a copy of VS Code (http://code.visualstudio.com); this is my favorite for both terminal and MarkDown editing\nFrom within VS Code, choose \u0026ldquo;Open Folder\u0026rdquo;\nRun Terminal\nRun \u0026ldquo;hugo version\u0026rdquo;\n2.
Create new site Run \u0026ldquo;hugo new site \u0026lt;sitename\u0026gt;\u0026rdquo;\nThe Hugo default folder structure is created automatically\nDownload (or clone from GitHub) a theme from http://themes.gohugo.io, and extract the folders in the \\themes folder + copy the config.toml from the \\themes folder into the root of the site directory that got created in the previous step.\n(Minimal is what I\u0026rsquo;m using for this blog site)\nRun \u0026ldquo;hugo server\u0026rdquo;\nOpen browser http://localhost:1313\nThis opens the sample theme-based website.\nBase site is working fine!\nStop the running \u0026ldquo;hugo server\u0026rdquo; process by pressing Ctrl-C in the terminal window.\nTo update or add new content, open the Content\\post subfolder\nCreate a new MarkDown document, or copy an existing one as a starting point; I personally use MarkdownMonster or Visual Studio Code as my MarkDown editor, but any advanced text editor should work out fine.\nFor a cheat sheet of MarkDown syntax, have a look here: https://github.com/adam-p/markdown-here/wiki/Markdown-Cheatsheet\nEdit some content in the page\nOnce your new post has been created, open your terminal again, and run \u0026ldquo;hugo server\u0026rdquo;. This will start a new web session. Validate from the browser on http://localhost:1313 that the new page is visible\nWorks!!\nHowever, this is \u0026ldquo;only local\u0026rdquo; on our dev station; to prepare the site to get published to Azure, compile it by running \u0026ldquo;hugo\u0026rdquo; from the command prompt (instead of hugo server)\nThis compile process creates the actual \u0026ldquo;web content\u0026rdquo; in a /public/ subfolder in the same directory as the Hugo site itself. 3. Publish to Azure Storage Account Deploy an Azure Storage Account v2 as a starter\nFrom Settings, select Static website\nEnter information for the default page and error page\nSave the settings; your static website URL gets generated and presented\nAs we now have the storage account static site service up-and-running, we can deploy our content.
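Whichever upload tool you pick, the deployment always boils down to the same mapping: every file under the compiled /public/ folder becomes a blob with the same relative path in the storage account. A small Python sketch of that mapping (my own illustration; the folder layout is the Hugo default described above):

```python
from pathlib import Path

def blob_names(public_dir):
    """List every file under Hugo's compiled /public/ folder as the
    relative blob name it would get when uploaded to the static site."""
    root = Path(public_dir)
    return sorted(
        p.relative_to(root).as_posix()  # blob names use forward slashes
        for p in root.rglob("*")
        if p.is_file()
    )
```

For a freshly compiled site this yields names like index.html and post/index.html, mirroring the structure you will see in the storage container after deployment.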
I am using (and recommending!) Visual Studio Code to do this, but you could also copy the content manually, using FTP or Azure Storage Explorer.\nFrom VS Code, add the Azure Storage extension\nFrom the Command Palette: Azure Storage / Deploy to Static Website\nComplete the prompts with answers from your Azure subscription and setup, and select your Hugo folder as a source. This will copy all files from the /public/ subfolder into the Storage Account $web folder\nWait for the process to complete successfully\nBrowse to the URL of the storage account\nDON\u0026rsquo;T PANIC!! THIS IS EXPECTED\nThe reason this FAILS is because Hugo is using the /public/ subfolder to publish all content; if you are just using flat HTML files in the Static Site, it will work right away.\nTry connecting to the URL path where our blog posts are stored, e.g. ../public/post/\nThe site page itself loads, but it\u0026rsquo;s not 100% OK; we need to find a solution for that /public/ URL update. Good news is, Azure has such a solution built-in, called Azure Content Delivery Network (CDN).\nLet\u0026rsquo;s deploy one.\n4. Deploy \u0026amp; Publish with Azure CDN New Resource / CDN /\nProvide a Name + Resource Group and Subscription details\nPricing = Standard Microsoft\nSelect Create a CDN endpoint now\nProvide a unique name for the URL\nOrigin type = Custom\nOrigin hostname = paste in the URL address of the Static Website without the https:// and without the trailing /\nOnce created, open the CDN Profile resource\nSelect the Endpoint you defined under the Endpoints section\nUnder Settings / select Rules engine\nIn the Rules engine / select Add Action / choose URL Rewrite\nCreate the following rule settings:\nSource Pattern: /\nDestination: /public/\nPreserve unmatched path: Yes\nSave the changes, and wait for the prompt that the configuration has been updated successfully. NOTE: it could take up to 10min before the changes are actually applied and working.
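To make the rule's effect concrete, here is a tiny Python sketch (my own illustration, not how the CDN is implemented) of what the URL Rewrite rule with Preserve unmatched path set to Yes does to an incoming request path:

```python
def rewrite(path, source="/", destination="/public/"):
    """Mimic the CDN URL Rewrite rule: swap the matched source prefix
    for the destination, preserving the unmatched rest of the path."""
    if path.startswith(source):
        return destination + path[len(source):]
    return path
```

So a request for /post/ is fetched from /public/post/ on the static website origin, which is exactly where Hugo publishes its compiled pages.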
So be patient.\nOnce received, test the Azure CDN URL from the browser, and click through to a site subsection like Posts or Publications (if you have content in there\u0026hellip;)\nTHIS WORKS!!\nFrom here, you could add a custom domain option, together with integrating HTTPS using the CDN Profile settings itself. I\u0026rsquo;m sure you will find out by yourself how to do that.\nHowever, allow me to continue on the scenario, and extend our setup with Azure Front Door, a global Azure load balancing service, which also performs SSL offloading, Session Affinity, URL rewrite/redirection and, probably the most important feature, Web Application Firewall.\nNOTE: from here, we won\u0026rsquo;t be able to stay within the \u0026lt;$5 /month consumption fee, as the required Azure Front Door rules will add an additional cost of +/- $20 /month, or +/- $50 /month if you enable the WAF option.\n5. Deploying Azure Front Door Technically, for this web site scenario, Azure Front Door offers all capabilities we used before from Azure CDN. Meaning, if you decide to go for Azure Front Door, you do not need Azure CDN.\nNew Resource / Front Door / Create\nComplete the basic settings around Subscription, Resource Group and location\nMove on to the Configuration step next\nClick the + sign to add the Front Door Frontends/domains settings, and provide a unique name for the Front Door URL you want to use\nContinue with Step 2, where you define the backend pool settings. This is basically pointing Front Door to the Static Site Storage Account URL\nClick \u0026ldquo;Add Backend Pool\u0026rdquo;, and complete the requested parameters:\nBackend Host Type: custom\nBackend Host Name: the Static Web Site URL, without the https:// and trailing /\nAccept / leave all other values as default\nConfirm the Backend Pool settings, which brings you back to the Backend Pool settings tab.
Here, leave the default values for Health Probes and Load Balancing for now\nConfirm the Backend Pool settings, and move on with Step 3, where you will define the necessary routing rule.\nIn the Add Rule parameter section, provide a name for the rule you are about to create. Accept the default values for Accepted Protocol, FrontEnd/domains and Patterns to Match.\nIn the Route details section, scroll down and change the setting for Custom forwarding path, adding the /public/ path that is used by Hugo (Note: this path could be different, depending on the Hugo theme; it technically refers to the directory Hugo uses to store the compiled site pages).\nWait for the Azure Front Door resource to get created; once this is complete, open your browser to the URL address of Front Door. This should open your web site nicely.\n6. Configure Public Custom Domain Name to Azure Front Door In this last section, we will update our Azure Front Door configuration for a public custom domain name. This is built-in, and integrates a Let\u0026rsquo;s Encrypt (FREE) SSL/TLS certificate to your web service. How cool is that! Note: you need to have a public DNS domain name already available, in which you need to create a CNAME alias record for the Azure Front Door frontend/domain name you configured. E.g. my 007FFFLearning domain has a \u0026ldquo;www\u0026rdquo; CNAME alias pointing to \u0026ldquo;hugofd.azurefd.net\u0026rdquo;. From the Azure Front Door blade, select Front Door Designer; next, select Frontends/domains, and add a new custom domain\nIf you also want to add the Domain HTTPS option to the configuration, select this setting a bit further down in the blade.
If you don\u0026rsquo;t have a PFX certificate file already for the public domain name space, have one generated by Front Door, by choosing Front Door managed.\nConfirm the settings by clicking Add\nWait about 10 minutes for the SSL request to complete, resulting in the below view:\nOnce the custom domain is configured, we need to make one last change in our routing rules, adding the custom domain to the configuration. Therefore, select the routing rule you created earlier, and update its settings:\nSave the changes, and wait for the update to get applied.\nAfter only a few minutes, everything should be running smoothly, allowing you to connect to your website custom domain URL:\nSelecting any of the subsections from the header menu, e.g. Posts, nicely redirects to all posts, allowing your readers to easily go from one blog article to the other.\nCongrats!! You made it all the way to the end!\nI hope you enjoyed this exercise, and are ready to fully work on your blog website now. Get back into Visual Studio Code, and start hammering out some MarkDown posts!\nAs always, I\u0026rsquo;m here for you if you are stuck somewhere, or want to let me know once your site is up-and-running.\nHappy Hugo-ing!\nTake care, Peter
Some locations immediately coming to mind:\nJohannesburg, South Africa (Lion \u0026amp; Rhino Park) Kathmandu, Nepal (overall scenery and the top of the mountains) Seattle, WA, USA (Harbor area, boat trip, Snoqualmie Falls) Ischl, Austria (closing of the skiing season, hiking at the top of the mountains) Bangalore, India (Shivoham Shiva Temple) Bay St Louis, MS, USA (Airbnb stay for 80 days, loved everything) \u0026hellip; and probably another +100 locations at least. What should be clear about those locations is they are all beautiful: especially the scenery of nature, sometimes extraordinary buildings, and most often both (outside of the great and awesome training experiences).\nSo no doubt about it, I\u0026rsquo;m missing traveling. Missing the walks, the drive-bys, the \u0026ldquo;enjoying my trip\u0026rdquo; feeling.\nAnd that\u0026rsquo;s where I discovered an interesting little gem to install on my laptop: Bing Desktop Wallpaper \u0026ldquo;Exploring the world, one photo at a time\u0026rdquo;.\nIf you have been using Bing as your search engine, or you had the joy of owning a Windows Phone in the past, you pretty well know what I\u0026rsquo;m talking about: beautiful, stunning pictures from anywhere in the world, changing every day.\nBy installing this tool on your Windows machine, it will show a different image as your desktop wallpaper every morning when starting your laptop.
It also stores these images on your machine (C:\\Users\u0026lt;user\u0026gt;\\AppData\\Local\\Microsoft\\BingWallpaperApp\\WPImages), so you can reuse them later on, if you don\u0026rsquo;t like the one from a given day :).\nWhile it is no substitute for the real-life experiences out there, it takes me away every now and then, dreaming for a few seconds about so many unexplored destinations that still need Azure training workshops to be delivered there.\nStay safe and healthy you all!\n/Peter\n","date":"2020-04-19T00:00:00Z","permalink":"/post/bing-wallpaper/","title":"Bing Desktop Wallpapers take you on a trip around the world"},{"content":"Hey,\nIt has been quiet for several weeks on the blog front, but that doesn\u0026rsquo;t mean nothing happened! The core subject of this post is sharing my happiness around being in the Azure Technical Trainer role for more than 6 months now. (Last time I was at Microsoft in 2016, this was about the time when I figured out I needed to move out of the team, exploring other opportunities\u0026hellip;) but not so this time!!\nI\u0026rsquo;m still enjoying my job role to the maximum, although we faced some challenges the last few weeks, like almost everyone else\u0026hellip; a shift from \u0026ldquo;90% travel\u0026rdquo; to \u0026ldquo;0% travel for the next unforeseen time\u0026rdquo;. I remember delivering AZ-300 and AZ-500 courses in the UK, when the COVID-19 situation came closer to Europe, and started to have an impact on our lives. At that time, we were still joking about it.
The final week involved a delivery in Dusseldorf, Germany, during which we got the message all future deliveries would switch to virtual, at least until the end of April most probably (in the meantime it got extended until the end of May, and will probably shift to the end of this fiscal I guess\u0026hellip;).\nMicrosoft made an immediate decision (initially as Seattle was the epicenter of the COVID-19 outbreak in the US, before NY) to shift all employees to a work from home status, which in my case meant delivering the same trainings, but all virtual. As I was already used to this from before, I didn\u0026rsquo;t have to make dramatic changes in my delivery style, but I had to learn that attendees are much more distracted, busy with taking care of their families,\u0026hellip; in between attending the sessions. So some deliveries were a bit slower, having less interaction,\u0026hellip; which is totally understandable.\nOutside of the deliveries themselves, which thankfully keep me busy during the days, I also worked on lab updates for AZ-500 Azure Security Solutions, and received the results of the beta exam AZ-120 Planning and Administering Microsoft Azure for SAP Workloads, which apparently I passed. Looking forward to some of those deliveries in the near future, as it\u0026rsquo;s been some time since I delivered such training out of my own custom offering in the past.\nWhat amazed me was Microsoft\u0026rsquo;s Leadership Team (LST) decision to halt all \u0026ldquo;non-paid\u0026rdquo; Azure subscriptions, to provide prioritized datacenter capacity to first responders, healthcare and education (see the full post here: ), resulting in not having Azure Passes available for students to do labs during our Azure trainings (see this note: for details). While this felt weird at first, I actually managed to deliver AZ-300/301, AZ-204 and AZ-400 without those Azure subscriptions. And attendees somehow understood it.
Although I have to be honest, it required much more effort from my side as the trainer to keep them entertained the full week, doing many more demos, whiteboarding and open Q\u0026amp;A.\nThese last 6 weeks have been challenging from a training delivery perspective, but on the other hand also not all that different. I\u0026rsquo;m happy to see how business customers are still allowing their employees to attend training in these difficult times. While I don\u0026rsquo;t really know when exactly, I\u0026rsquo;m sure we will all come out of this better.\nLooking forward to my next \u0026ldquo;life as an ATT\u0026rdquo; post in about 6 months from now, when I\u0026rsquo;m celebrating my 1st year within Microsoft. Who knows what a ride it will be to get there\u0026hellip;\n/Peter\n","date":"2020-04-12T00:00:00Z","permalink":"/post/my-first-6-months-as-att/","title":"My 1st 6 months as an Azure Technical Trainer at Microsoft"},{"content":"You might be amazed, finding out I managed to publish yet another book, when the previous one, Efficiently Migrating your workloads to Azure, only got published around Christmas.\nHowever, this new one was a \u0026ldquo;longer work in progress\u0026rdquo;, and not something I could spew out in just a few weeks.\nSeveral months ago, I got approached by Packt Publishing, asking me to \u0026ldquo;write a quick note on how I look at Azure strategic implementation and migration\u0026rdquo;, which was initially sized at max. 20 pages. However, it became clear this quick writing gig would end up becoming a lot larger, and eventually involve multiple authors, to cover the full Azure platform spectrum.\nAs the core question they started from is my core playground, it honestly didn\u0026rsquo;t take me that long to write down my vision. However, after I sent in my first draft, that\u0026rsquo;s where the real work came in (as well as the decision to make this a larger project).
Based on the fast pace of Azure services and overall updates coming out, I wanted to make sure they were captured in the book as well. As - after all - this guide was written for technical decision makers, IT managers, cloud architects,\u0026hellip; helping them in making a strategic decision to start moving workloads to Azure. So it had better be complete\u0026hellip;\nJumping a few months further into 2019, and literally having lost track of the number of updates I worked on, also based on the tremendous input from technical reviewer, MCT, MVP and overall cloud enthusiast and solution architect Steve Buchanan (@bucatech), I\u0026rsquo;m honored and proud to see this work published on the Microsoft website:\nhttps://azure.microsoft.com/en-us/resources/azure-strategy-and-implementation-guide-third-edition/\nsummarized as follows:\nGet a step-by-step introduction to using Azure for your cloud infrastructure with this Packt e-book. Read the latest edition of the Azure Strategy and Implementation Guide for detailed information on how to start taking advantage of Azure cloud capabilities. Download this e-book to:\nGet an overview of Azure benefits and best practices for planning your migration. Make cloud architecture and design choices that best fit your organization. Learn how to manage and optimize your new cloud environment. As always, I hope this book maps with your interests and helps in your journey to Azure. Do not hesitate reaching out or sharing your feedback,\n/Peter\n","date":"2020-02-03T00:00:00Z","permalink":"/post/new-book-published/","title":"New book published: Azure Strategy and Implementation Guide, 3rd edition"},{"content":"Things are going fast in the Azure world, and apparently exams are more and more following that pace.
Before you start screaming and worrying, as you might be preparing for a current exam, let me share a bit about the several steps happening before an exam becomes available.\nExam Objectives (OD) For a long time, Microsoft Learning has based exams on \u0026ldquo;exam objectives\u0026rdquo;, which is typically a list of \u0026ldquo;services, features, activities\u0026rdquo; anyone taking the exam should master. An easy example from the Exchange Server 2007 time frame could have been:\nUnderstanding that Exchange Server relies on several Windows OS components like Internet Information Server, .NET Framework,\u0026hellip; (services examples);\nKnowing the different Exchange Server Roles, and how they relate to each other;\nHow to enable Unified Messaging;\nWhat PowerShell command to use to repair an Exchange database;\n\u0026hellip;\nSo basically more of a \u0026ldquo;stand-alone\u0026rdquo; approach of testing your knowledge, but less focused on testing your skill set.\nThe same approach was more or less valid for the initial Azure exams (70-532, 70-533, 70-534/535); each exam tested you on a lot of services in the platform, how to deploy them, manage them,\u0026hellip; but not always related to how they are being used \u0026ldquo;in the field\u0026rdquo;.\nJob Task Analysis (JTA) All of that changed with the new Azure exams, as announced during the Ignite conference in 2018. A few months before the announcement, I was invited by Microsoft Learning to participate in \u0026ldquo;JTA - Job Task Analysis\u0026rdquo; workshops, together with several other SMEs (Subject Matter Experts), to brainstorm about how Azure components relate to a certain job role.\nSimply said, taking the \u0026ldquo;Azure Administrator AZ-103\u0026rdquo; exam, we discussed what the core services in Azure are, and what you would need to know about them, in order to relate to your job.
Because in the end, there are quite a lot of services in Azure which you probably hardly (or never) touch as a typical Azure Administrator. The same goes for all the other job roles we identified (Administrator, Data Scientist, Developer, DevOps Engineer, Security Engineer, Solutions Architect,\u0026hellip;). Based on the outcome of these JTA discussions, new exam questions were created, most of them being more relevant (and harder) in relation to - again - a job role, instead of just testing you on the service or feature. Several Azure exams also got updated with \u0026ldquo;performance-based testing\u0026rdquo;, which means you need to perform a series of actual tasks in a live Azure Portal (e.g. deploy a Virtual Network, configure an Azure Backup job based on certain criteria, configure diagnostics logs,\u0026hellip;), so - once more - testing your skill set, in relation to what you are assumed to know when having that specific job role.\nI personally like this approach much better than the old system, as the credential now actually proves you have both knowledge (theoretical questions) and hands-on skills (practical tasks).\nWhat will change, and why change again? As already touched on in the first paragraph, the Azure world, in all its glory and capabilities, is changing dramatically. New features are coming out on a regular basis, existing services are getting better, more complete, and overall, the different services are easier to get integrated with each other.
To keep up with the ever-changing job role demands on top, it makes total sense that the exams are getting updated on a regular basis as well.\nThe following list of Azure exams will get an update, but the old ones will remain for another 90 days after the new ones are released, as a transition period:\nAZ-103 becomes AZ-104 (this update will be published in March)\nAZ-203 becomes AZ-204 (this update will be published in late February)\nAZ-300 becomes AZ-303 (this update will be published in March)\nAZ-301 becomes AZ-304 (this update will be published in March)\nImportant to mention is that the exam (certification) title will not change, nor will anyone holding the current credential lose that credential. If you have the AZ-300 certification for example, it will remain valid until its expiration date, even with the AZ-303 being available.\nDoes this mean I can forget about everything I already studied, and need to start all over? Technically, the newly announced exams will test you on \u0026ldquo;newer\u0026rdquo; updates out of the Job Task Analysis. This doesn\u0026rsquo;t mean that you will be tested on \u0026ldquo;newer\u0026rdquo; services or features only. One example I could think of is Azure Containers and Kubernetes Services; these were not part of the exam objective domains in AZ-103 today, but given the growth and popularity of these services, they might get included in the objective domain for AZ-104, although the services and capabilities have been in Azure for a longer time already.\nWhere can we find additional information related to these announced updates?\nAs always, the Microsoft Learn website is the best resource related to Microsoft exams, certification, Microsoft Official Courseware content and more.
When all details are ready, they will be published on that Learn portal immediately.\nhttp://www.microsoft.com/learn\nThese newly announced exams will also require updates to several of the trainings I deliver out of my current role as Azure Technical Trainer (ATT) within Microsoft. Time to go check on the updates, work on my updated stories, and fine-tune some cool demos and lab guide steps for my attendees. The changes will arrive fast if you ask me!\nAs always, don\u0026rsquo;t hesitate to reach out if you have any questions on this or any other topic.\nKind regards, Peter\nhttps://azure.microsoft.com/en-us/resources/azure-strategy-and-implementation-guide-third-edition/\nsummarized as follows:\nGet a step-by-step introduction to using Azure for your cloud infrastructure with this Packt e-book. Read the latest edition of the Azure Strategy and Implementation Guide for detailed information on how to start taking advantage of Azure cloud capabilities. Download this e-book to:\nGet an overview of Azure benefits and best practices for planning your migration. Make cloud architecture and design choices that best fit your organization. Learn how to manage and optimize your new cloud environment. As always, I hope this book maps with your interests and helps in your journey to Azure. Do not hesitate to reach out or share your feedback,\n/Peter\n","date":"2020-01-25T00:00:00Z","permalink":"/post/updated-azure-exams/","title":"Updated Azure Exams Announced"},{"content":"I\u0026rsquo;m excited that this next project finally went live, thanks to a bit of quiet time during the holidays. While it is not the full 100% of what I had in mind, I didn\u0026rsquo;t want to hold back the content any longer, since Azure is already moving that fast\u0026hellip;\nThe original idea for this material was based on a workshop I created for Microsoft internally (Azure Developer Series) in September 2018 as a contractor, which was a combination of slides, videos and lab guides. 
The workshop existed in both in-person and virtual delivery formats. At that time, the sample application was rather basic. In early July 2019, I got asked to work on an update of the content and extend it with Azure DevOps, which was well adopted in the market already, but still unknown to many. Instead of just working on “updates”, I decided to start from scratch and work towards a more “business ready” application, using SimplCommerce, an open-source e-commerce platform built in .NET / .NET Core and supporting different database back-ends.\nFlipping the presentations and lab guides into a book seemed like an interesting idea at that time.\nTalking to several people about this, it became clear that – given the focus on the technical side of the Azure platform, together with the focus on the hands-on aspect of the workshop – most vouched for a hands-on guide, leaving the ‘speaker notes’ behind. Next, my move to Microsoft as a full-time employee in mid-September 2019 was another good reason to shorten the format of this book. It would still take me another 3 months (Christmas holidays aka slower pace\u0026hellip;) to go through all the labs again myself, guaranteeing the book was ready for use, even without a trainer available to ask questions.\nThis is the first book I’m self-publishing, and my 6th book overall; see http://www.007ffflearning.com/publications for more details on the other material I have written over the years.\nThe benefit for you as a reader is that you will get continuous updates. Whether these are bug fixes, additional chapters/lab steps or major updates to existing labs, you will get notified about them. The advantage for me as the author is that it is probably one of the best ways to publish content on a topic that moves as fast as Azure.\nAs always, I hope this book maps with your interests and helps in your journey to Azure. 
Do not hesitate to reach out or share your feedback,\nHere is some more info about the actual book contents:\nHands-On-Lab Scenario You are part of an organization that today runs a .NET Core e-commerce platform application on Windows Server infrastructure on-premises, comprising a WebVM running Windows Server 2012 R2 with Internet Information Services (IIS) and a second SQLVM running Windows Server 2012 R2 and SQL Server 2014.\nThe business has approved a migration of this business-critical workload to Azure, and you are nominated as the cloud solution architect for this project. No decision has been made yet on what the final architecture should or will look like. Your first task is building a Proof of Concept in your Azure environment, to test out the different possible architectures:\nInfrastructure as a Service (IaaS)\nPlatform as a Service (PaaS)\nContainers as a Service (CaaS)\nAt the same time, your CIO wants to use this project to switch from a more traditional mode of operations, with barriers between IT sysadmin teams and developer teams, to a DevOps way of working. 
Therefore, you are tasked with exploring Azure DevOps and determining where CI/CD Pipelines can assist in optimizing the deployment and running operations of this e-commerce platform, especially when deploying updates to the application.\nAs you are new to the continuous changes in Azure, you want to make sure this process goes as smoothly as possible, from assessment to migration to day-to-day operations.\nAbstract and Learning Objectives This workshop enables anyone to learn about, understand and build a Proof of Concept for migrating a multi-tiered .NET Core web application (SimplCommerce, open source, http://www.simplcommerce.com) with a Microsoft SQL Server database to the Azure public cloud, leveraging different Azure Infrastructure as a Service (IaaS), Azure Platform as a Service (PaaS) and Azure container offerings like Azure Container Instances (ACI) and Azure Kubernetes Service (AKS).\nImmediately in lab 1, students get introduced to the basics of automating Azure resource deployments using Visual Studio and Azure Resource Manager (ARM) templates. Next, readers learn about the importance of performing proper assessments, and what tools Microsoft offers to help in this migration preparation phase. 
Once the application has been deployed on Azure Virtual Machines, students learn about migrating the Microsoft SQL database to Azure SQL PaaS, as well as deploying and migrating web applications to Azure Web Apps.\nAfter these foundational platform components, the next exercises focus entirely on the core concepts and advantages of using containers for running business workloads, based on Docker, Azure Container Registry (ACR), Azure Container Instances (ACI) and Web Apps for Containers, as well as how to enable container orchestration and cloud-scale using Azure Kubernetes Service (AKS).\nIn the last part of the workshop, readers get introduced to Azure DevOps, Microsoft\u0026rsquo;s application lifecycle management environment, and build a CI/CD Pipeline to publish workloads using DevOps principles and concepts, showing the integration with the Azure services already touched on, like Azure Web Apps and Azure Kubernetes Service (AKS). The workshop closes with a module on overall Azure monitoring and operations, and the tools Azure has available to assist your IT teams in this challenge.\nThe focus of the material is a Hands-On-only Lab experience, going through the following exercises and tasks:\n· Deploying a 2-tier Azure Virtual Machine setup (web server and SQL database server) using ARM-template automation with Visual Studio 2019;\n· Publishing a .NET Core e-commerce application to an Azure Web Virtual Machine and SQL DB Virtual Machine;\n· Performing a proper assessment of the as-is web and SQL infrastructure using Microsoft assessment tools;\n· Migrating a SQL 2014 database to Azure SQL PaaS (Lift \u0026amp; Shift);\n· Migrating a .NET Core web application to Azure Web Apps (Lift \u0026amp; Shift);\n· Containerizing a .NET Core web application using Docker, and pushing it to Azure Container Registry (ACR);\n· Running Azure Container Instances (ACI) and Web App for Containers;\n· Deploying and running Azure Kubernetes Service (AKS);\n· Deploying Azure 
DevOps and building a CI/CD Pipeline for the subject e-commerce application;\n· Managing and monitoring Azure Kubernetes Service (AKS).\nAt last\u0026hellip;, I also want to thank Amita Thukral, a far-away friend from India, with whom I have had the pleasure of working over the years on Azure virtual workshop deliveries, where she moderated the questions from the audience, and who is overall a very nice and professional person to work with. She did a tremendous job screening the scenario and going through all the lab steps, to make sure it all made sense, even for less Azure-experienced folks.\nIf this got your attention, head over to Leanpub and grab yourself a copy of the book. And start learning Azure :).\nLooking forward to your feedback,\nbest regards,\n/Peter\n","date":"2020-01-03T00:00:00Z","permalink":"/post/new-azure-book-selfpublish/","title":"Happy to announce my newest Azure book got (self) published"},{"content":"Earlier this week, I discovered \u0026quot;Azure Mystery Mansion\u0026quot; (https://www.microsoft.com/mysterymansion), published by the Microsoft Azure marketing team. As an Azure enthusiast, I wanted to find out about this mystery ;) and was actually really excited about it, as it is primarily focused on giving you an easy, fun, yet interesting learning experience.\nWhat is it? It heavily reminded me of several PC games I played as a teenager (yes, early \u0026rsquo;90s), where you could move around a game, and even decide on the flow and ending, by clicking keywords. Behind every phrase was a new adventure, and it often felt like you really owned the game, or had a conversation with the characters.\nThe flow of the Azure Mystery Mansion game is no different! Standing in front of the house, the goal is to walk around the different rooms and solve \u0026ldquo;puzzles\u0026rdquo;. When a puzzle is completed successfully, you win a key. 
You need 8 different keys to finish the game.\nHonestly, having literally lived in Azure full-time for the last 6 years, solving the puzzles wasn\u0026rsquo;t that hard for me. But even if you are totally new to Azure, each puzzle gives you enough hints (actually redirections to Azure documentation at Microsoft Learn) to make sure you can still enjoy (and complete) the game.\nAnd it actually is an easy, fun yet interesting way to learn about Azure - oh wait, I already said that :)\nWhen you manage to complete the game, you have satisfied your Uncle Bill\u0026rsquo;s will, and become the owner of the house. Next to that, you receive a cool badge to show off on Twitter (and tease/invite more people to play the game).\n(Note: if you want to know more about how the game idea started, as well as learn about some of the technologies used to build it, have a look at Jen Looper\u0026rsquo;s blog article here.)\nHave fun playing the game, and learning Azure!\n/Peter\n","date":"2019-12-31T00:00:00Z","permalink":"/post/azure-mystery-mansion/","title":"Azure Mystery Mansion"},{"content":"As I got asked so many times what Azure learning resources are available, I thought this could make an excellent blog post :) - it is actually based on a summary slide I have added to the closing deck of my in-person and online Azure training workshops, but updated where needed. The list below is in random order, not implying any priority or preference:\nMicrosoft Learn\nMicrosoft Hands-on Labs\nAzure Docs\n3rd party learning resources\nLet me guide you through each one of them:\n1. Microsoft Learn (http://www.microsoft.com/learn)\nMicrosoft Learn is \u0026ldquo;the\u0026rdquo; landing page for all learning resources Microsoft has to offer, not just Azure. Here, you find a list of all current learning paths, pointers to hands-on lab exercises in a sandboxed setup, an overview of Microsoft certifications and exams, and much more. 
It also points you to the official Microsoft Docs website (see below).\nBy selecting Browse all paths or Browse all learning options, you are redirected to the actual Learning Paths. A Learning Path is a collection of learning material, which can be documentation, a training video and/or an exercise. Most of the time, it is really a combination of all 3 flavors. This reflects the different learning styles people have. Some learn better from reading (docs), others like to hear and see (video), while others - including myself - mainly learn by doing (hands-on labs).\nUsing the filters on the side, you can find the specific Azure material, or even drill down into the specific Azure services or features you want to focus on. (About 30% of all Learning Path material is related to Azure\u0026hellip;)\nNext, you can choose the full Learning Path, giving you several hours of content to go through, or pick stand-alone modules, which are typically shorter (30-90 mins) and more focused.\nIn this example, I filtered on Azure / Functions, which brings up a list of 2 Learning Paths and 15 Modules (at the time of writing; it might change over time :)). Let me select the Create Serverless Applications Learning Path; this opens a list of stand-alone content, again nicely structured per topic. Each topic is in turn a collection of shorter snippets.\nI hope this gets you going in your Azure-learning journey. But wait, there is more ;)\n2. 
Microsoft Hands-on Labs (http://www.microsoft.com/handsonlabs/selfpacedlabs)\nCompared to the \u0026ldquo;watch or read\u0026rdquo; approach of the Microsoft Learn Learning Paths, the Microsoft Hands-on Labs offer you self-paced labs, focusing on a \u0026ldquo;learn by doing\u0026rdquo; concept.\nAgain, this source does not just offer Azure material, but covers most of the Microsoft product stack (Office 365, .NET development, Windows Server, Windows Client,\u0026hellip;).\nIf you filter on Azure content, it currently shows 30 different labs, from beginner to advanced level.\nThe most interesting aspect - besides learning by doing, of course - is that you don\u0026rsquo;t need an Azure subscription to perform the lab steps. While there is still a separate URL to get here, it actually redirects you back to the overall Microsoft Learn website. However, there is no easy way to retrieve the hands-on labs only (Microsoft, please make this available as a learning type option). So it requires some wandering around the website, browsing Learning Paths and Modules, to find any resource having \u0026ldquo;exercise\u0026rdquo; in the title.\nAs an example, I selected \u0026ldquo;Create a Windows Virtual Machine\u0026rdquo;; as you can see from the screenshot below, it offers to activate a sandbox. This creates a temporary Azure subscription, dedicated to this specific lab scenario. You can activate 10 such sandbox environments per day, which should be more than enough for most learners.\nAfter giving consent using a Microsoft account (Outlook, Hotmail,\u0026hellip;) (Office 365 accounts don\u0026rsquo;t seem to work here?), it adds a temporary subscription to your Microsoft account credentials, in a dedicated Microsoft Learn Sandbox Azure tenant.\nFrom here, you can literally follow the instructions from the exercise description pages. Pretty sweet in my opinion!\n3. 
Azure Docs (https://docs.microsoft.com/en-us/azure)\nThere is no better resource to learn about Azure than the official Azure documentation! While it is obviously not built as a learning tool per se, it actually does the job really well! Starting from a high-level overview of Azure services, you can easily drill down to the specific topic you want to learn about. For most services, this will list a \u0026ldquo;tutorial\u0026rdquo; section. That\u0026rsquo;s where you find the most useful \u0026ldquo;how-to\u0026rdquo; documentation and guides, if that\u0026rsquo;s what you are looking for.\nUsing a similar example as before, I made the following selections:\nGet started with Azure / Deploy Infrastructure\nWindows Virtual Machines\n(Notice the link to the previously discussed self-paced training also shows up here)\nThis brings me to the actual Azure docs pages, describing how to create a Windows Virtual Machine in Azure. From here, I can scroll down to the specific deployment approach I want to learn, being PowerShell, Azure CLI or the Portal.\nBesides reading through the different steps, you can also try them out live, assuming you already have an Azure Free or paid subscription (in contrast to the sandbox scenario described earlier).\n4. Third party learning resources Aside from the above 3 Microsoft-owned resources, there is a huge amount of (free and paid) Azure learning material available on the internet. My recommendation is to filter on content that is less than 6 months old, as otherwise it will probably be too outdated (depending a bit on the Azure service).\nWithout trying to be complete, below is a list of learning partners offering some very good and up-to-date content on Azure. I have authored several videos for the first 3 listed here, but don\u0026rsquo;t exclusively check those. 
Several also provide a multi-day trial subscription, which could be just enough to learn about that one specific Azure service.\nOpsgility\nPackt\nApress\nPluralsight\nA Cloud Guru\nUdemy\nYoutube\nI hope this article gives you enough insight into the different Microsoft and 3rd party Azure learning resources available today.\nDon\u0026rsquo;t hesitate to reach out if you have any questions on the discussed content.\n/Peter\n","date":"2019-12-22T00:00:00Z","permalink":"/post/azure-learning-resources/","title":"Azure Learning Resources"},{"content":"Hey,\nAs most of you probably know by now, I joined Microsoft as a full-time employee in mid-September this year, in the prestigious Azure Technical Trainer team - EMEA. This team is part of the WorldWide Learning (WWL) organization and works on the Enterprise Skills Initiative (ESI) program.\nAs the name says, my role as Azure Technical Trainer is to provide training to larger Microsoft partners and customers in EMEA, helping them be successful in using Azure, mainly by skilling up their IT teams (sysadmins, developers, solution architects). While the training starts from the Microsoft Official Courseware (MOC), we are not bound to using only that material. That was a huge plus to me, knowing I built and delivered most of my previous workshops using my own content (or vendor content I was involved in building).\nAnother big part of the fun in my role is the traveling. Having traveled around the globe to deliver workshops for the last 6 years, I found it is a huge source of inspiration to me, seeing how Azure is being adopted differently depending on where in the world I am. I had the joy of visiting some cool places like Sofia, Wroclaw, Krakow, Johannesburg, Manchester, Cork, Cardiff\u0026hellip; and so many more! 
(the job description said 75% travel, but I enjoyed the 93% :)) - 14 different locations in as many weeks!\nNext, given my 6+ years of focus on Azure, I love the fact that my senior leads trust me to deliver all the different Azure workshops right away. As I was familiar with most of the content already, this was fine by me. I\u0026rsquo;ve delivered several of these:\nAZ-900 (Azure Fundamentals),\nAZ-103 (Azure Administrator),\nAZ-203 (Developing Azure Solutions),\nAZ-300/301 (Architecting Azure Solutions).\nAt the beginning of 2020, I will also start delivering\nAZ-400 (Azure DevOps)\nAZ-500 (Azure Security Solutions)\nas demand for these more advanced courses is growing. To me, this means the usage of Azure in the field is also growing, and becoming more mature.\nOutside of workshop delivery, which is the biggest part of my job, I\u0026rsquo;m also involved in a few virtual teams, one working on content optimization for the AZ-103 material, and another where we are brainstorming and working towards making content better for hybrid training delivery (both in-person and online at the same time).\nSo far, this job is a perfect match with my skills, my interests and overall what I have loved doing for the last 6 years. I hope to continue expanding my skills by delivering other Azure workshops that are sometimes a bit out of my comfort zone, and will probably also take a step towards the Artificial Intelligence (AI) courses we have on offer. 
(Currently, I am working on a license plate scanning tool, simulating a toll booth system, to have some cool demos during my workshops.)\nThere is always something new in Azure, and that maps with my personal \u0026ldquo;keep on learning\u0026rdquo; mentality.\nAs I have always loved sharing knowledge, I will try to continue doing so from here, providing you with technical blog posts, along with some short-snippet videos every now and then, on some Azure service or feature.\nWishing you all happy holidays, and stay tuned for more Azure news in 2020.\n/Peter\n","date":"2019-12-15T00:00:00Z","permalink":"/post/1st-quarter-as-att/","title":"My 1st quarter as an Azure Technical Trainer at Microsoft"},{"content":"Hi, my name is Peter De Tender. I am a Microsoft Technical Trainer (MTT) within the Microsoft World Wide Learning organization. In this role, I provide Azure readiness workshops to larger Microsoft customers and partners across the globe, skilling up their Azure knowledge and preparing them for Azure certification. Having lived in Belgium for 45 years, I recently relocated to Redmond, WA, USA, to continue my role as MTT out of the West US team. Before taking on this position, I was already an Azure trainer and Azure Solution Architect out of my own business, with a background in Microsoft datacenter consulting (Exchange, Forefront, System Center and Active Directory security as prime technologies, besides HP Servers and NetApp Storage :). 
Although I switched to the blue badge life, I continue providing Azure readiness content here, as well as participating in user group events and presenting at different global conferences, in person or virtually.\nI have been a Microsoft Certified Trainer since 2010 (officially, but I trained before that already).\nI was recognized as a Microsoft MVP (Most Valuable Professional) from 2013, initially in Windows IT Pro, switching to Azure from 2015-2019.\nSince 2011, I have been the EMEA chairman of the IAMCT (International Association of Microsoft Certified Trainers), a global community of MCTs outside of Microsoft Learning.\nI\u0026rsquo;ve been married for almost 27 years to my loving wife Els, and am the proud father of 2 wonderful girls and a cat.\nYou can reach me by email at peter at pdtit dot be, or on Twitter @pdtit or @007FFFLearning\n","date":"0001-01-01T00:00:00Z","permalink":"/about/","title":"About Me"},{"content":" Conference / User Group Event Date Session Topic URL MMS 2026 at MOA 2026-05 DevSecOps with GitHub Advanced Security (GHAS) https://mmsmoa.com/mms2026moa SREday Seattle 2026-04-21 Agentic DevOps with GitHub Copilot https://sreday.com/2026-seattle-q2/Peter_De_Tender_Microsoft_Agentic_DevOps_with_GitHub_Copilot PowerShell + DevOps Global Summit 2026 (Bellevue) 2026-04 Achieving SRE (Site Resiliency Engineering) with Azure https://PowerShellSummit.org Azure Spring Clean 2026 2026-03 DevSecOps with GitHub Advanced Security (GHAS) https://www.azurespringclean.com NDC London 2026 Speaker profile / Azure talks https://ndclondon.com/speakers/peter-de-tender Festive Tech Calendar 2025 2025-12 DevSecOps with GitHub Advanced Security (GHAS) https://festivetechcalendar.com MMS 2025 at MOA (Minneapolis area) 2025-05 Cut Costs, Not Corners in Azure Monitor https://mms2025atmoa.sched.com/event/b4572ab5d6ab924b8d754cd29fd59862 Conf42 SRE 2025-04-17 Azure Load Testing in action 
https://www.conf42.com/Site_Reliability_Engineering_SRE_2025_Peter_De_Tender_azure_load_testing PowerShell + DevOps Global Summit 2025 (Bellevue) 2025-04 Unlocking Productivity with GitHub Copilot https://www.youtube.com/watch?v=YzniuduA3cs North America MCT Summit 2025 2025-03 Using Jupyter Notebooks to create compelling and interactive Azure CLI demos and more\u0026hellip; https://namctsummit.com/ Cloud8 Virtual Summit 2025 2025-02 Azure Developer CLI - deploying end-to-end Azure environments, for non-developers https://www.cloudeight.ch/ Festive Tech Calendar 2024 2024-12 Developing Custom Copilots with Azure AI Studio, PromptFlow and .NET https://festivetechcalendar.com Azure Back to School 2024 2024-09 Application Insights - Inside-Out https://azurebacktoschool.github.io SciFiDevCon 2024 2024-05 Building a Marvel Hero App using Blazor and .NET8 https://www.007ffflearning.com/post/building-a-marvel-hero-app-with-blazor-.net8/ Constant Call for Speakers - MC2MC events 2024-04 Azure DevOps, pipelines to bring cloud magic https://www.mc2mc.be PowerShell + DevOps Global Summit 2024 (Bellevue) 2024-04 Microsoft DevOps Solutions or how to integrate the best of Azure DevOps and GitHub https://PowerShellSummit.org North America MCT Summit 2024 2024-03 Azure AI Document Intelligence - OCR on steroids https://sessionize.com/pdtit/ 90DaysOfDevOps - 2024 Community Edition 2024-01 DevSecOps as an approach to building and deploying secure applications by “shifting left” https://github.com/MichaelCade/90DaysOfDevOps Global AI Conference 2023 2023-12 Azure AI - OCR on Steroids https://www.007ffflearning.com/post/festive-2023-ocr-on-steroids/ Festive Tech Calendar 2023 2023-12 How .NET Blazor moved me from Infrastructure to Developer at age 47 https://festivetechcalendar.com Live! 
360 Orlando 2023 2023-11 Achieving SRE (Site Resiliency Engineering) with Azure https://live360events.com PowerShell + DevOps Global Summit 2023 (Bellevue) 2023-04 Achieving SRE (Site Resiliency Engineering) with Azure https://PowerShellSummit.org Azure Spring Clean 2023 2023-03 ACR, ACI, ACS, AKS, DCK\u0026hellip; aka the Container Alphabet Soup https://www.azurespringclean.com Festive Tech Calendar 2022 2022-12 Building a Marvel heroes webapp using Blazor https://festivetechcalendar.com Gimme Cloud Talks (User Group) 2022-09 Achieving SRE (Site Resiliency Engineering) with Azure http://gimmecloudtalks.com Azure Back to School 2022 2022-09 Achieving SRE (Site Resiliency Engineering) with Azure http://azurebacktoschool.com Azure Global Bootcamp 2022 - MTT edition 2022-05 Mastering Chaos Engineering with Azure Chaos Studio https://sessionize.com/pdtit/ Irish Techie Talks (User Group) 2022-05 ACR, ACI, ACS, AKS, DCK\u0026hellip; aka the Container Alphabet Soup https://www.youtube.com/channel/UCz-9-A41E5LUtEzkm2W4Iiw Azure Spring Clean 2022 2022-03 Achieving SRE (Site Resiliency Engineering) with Azure https://www.azurespringclean.com Limerick DotNet-Azure User Group 2022-01 DevSecOps as an approach to building and deploying secure applications by “shifting left” https://www.meetup.com/Limerick-DotNet/ Azure User Group Sweden 2022-01 Achieving SRE (Site Resiliency Engineering) with Azure https://www.meetup.com/azureusergroupsundsvallsverige/ Festive Tech Calendar 2021 2021-12 Achieving SRE (Site Resiliency Engineering) with Azure https://festivetechcalendar.azurewebsites.net/ Azure Community Conference 2021 2021-10 Achieving SRE (Site Resiliency Engineering) with Azure https://azconf.dev Azure Bootcamp South Africa 2021 2021-09 Achieving SRE (Site Resiliency Engineering) with Azure https://sessionize.com/pdtit/ Azure Back to School 2021 2021-09 Achieving SRE (Site Resiliency Engineering) with Azure https://azurebacktoschool.tech/ Cloud Lunch and Learn Marathon 2021 
2021-05 ACR, ACI, ACS, AKS, DCK\u0026hellip; aka the Container Alphabet Soup https://www.cloudlunchlearn.com Virtual Scottish Summit 2021 2021-02 Azure DevOps, pipelines to bring cloud magic https://ScottishSummit.com AzConf 2020 2020-11 Azure loves Terraform loves Azure http://azconf.dev Collabdays Lisbon 2020 2020-10 Azure loves Terraform loves Azure https://www.collabdays.org/2020-lisbon/ Azure Day Rome 2020 2020-06 Azure is 100% High-Available\u0026hellip; or is it? http://www.azureday.it Live! 360 Orlando 2019 2019-11 Azure loves Terraform loves Azure https://live360events.com TechMentor Microsoft HQ 2019 2019-08 Azure is 100% High-Available\u0026hellip; or is it? https://techmentorevents.com Microsoft Techdays 2019 2019-02 Azure loves Terraform loves Azure http://www.techdaysfi.com/ TechMentor Redmond 2018 Speaker profile / Azure sessions https://techmentorevents.com/events/redmond-2018/speakers/speaker%20window.aspx?SpeakerId=%7B35F4D4CF-07AA-4250-A33A-5DF48A899DEE%7D CloudBrew 2017 - A full-day Microsoft Azure event 2017 Building your Azure dashboards like a fighter jet pilot https://sessionize.com/pdtit/ MVPDays Online November 2018 2018 AZ-123, or what has changed in the Azure certification landscape https://sessionize.com/pdtit/ Live! 
360 Orlando 2018 2018 Azure Security Unchained https://sessionize.com/pdtit/ Techorama 2018 2018 Exam Prep session for Azure Architects 70-535 https://sessionize.com/pdtit/ MVPDays Online October 2018 2018 Open Source database solutions in Azure https://sessionize.com/pdtit/ Experts Live Europe 2018 2018 The Future of Windows Server Expert Panel https://sessionize.com/pdtit/ Intelligent Cloud Conference 2019 2019 AZ-123, or what has changed in the Azure certification landscape https://sessionize.com/pdtit/ Techorama Belgium 2019 2019 AZ-123, or what has changed in the Azure certification landscape https://sessionize.com/pdtit/ Experts Live Europe 2019 2019 Become the greatest Azure-bender - Tame that cloud for what you need it https://sessionize.com/pdtit/ Global Azure Bootcamp 2019 2019 Deep inspecting your Azure network traffic using Azure Network Watcher https://sessionize.com/pdtit/ Cloud \u0026amp; Datacenter Conference Germany 2019 2019 Deep inspecting your Azure network traffic using Azure Network Watcher https://sessionize.com/pdtit/ EXPERTS LIVE NETHERLANDS 2019 2019 Deep inspecting your Azure network traffic using Azure Network Watcher https://sessionize.com/pdtit/ MVPDays Online January 2019 2019 Docker for IT Pro\u0026rsquo;s https://sessionize.com/pdtit/ MVPDays Online February 2019 2019 Mastering Azure with Visual Studio Code https://sessionize.com/pdtit/ CloudBrew 2019 - A two-day Microsoft Azure event 2019 Mastering Azure with Visual Studio Code https://sessionize.com/pdtit/ Microsoft Techdays 2020 2020 7 Habits Every Azure Admin Must Have https://sessionize.com/pdtit/ IglooConf 2020 2020 Azure Security at your service (Learn Azure Security Center and Azure Sentinel) https://sessionize.com/pdtit/ Live! 
360 Orlando 2021 2021 Become the greatest Azure-bender - Tame that cloud for what you need it https://sessionize.com/pdtit/ DevSum2021 2021 Hands-on-labs: Migrating a .NET legacy application to Azure Container Services https://sessionize.com/pdtit/ Techorama 2021 Spring Edition 2021 The good, the bad and the ugly of Infrastructure as Code https://sessionize.com/pdtit/ DevSum19 Archived AZ-123, or what has changed in the Azure certification landscape https://sessionize.com/pdtit/ Azure Lowlands Archived Azure Security Unchained https://sessionize.com/pdtit/ ITproud Archived Azure Stack from the trenches, what you need to know https://sessionize.com/pdtit/ Michigan Azure and M365 User group Archived Become the greatest Azure-bender - Tame that cloud for what you need it https://sessionize.com/pdtit/ MVPDays \u0026ldquo;Azure Security Center and Windows Defender ATP\u0026rdquo; Day Archived Becoming and Staying compliant, thanks to Azure Security Center https://sessionize.com/pdtit/ EuropeClouds Summit Archived Hands-on-labs: Migrating a .NET legacy application to Azure Container Services https://sessionize.com/pdtit/ Microsoft Build Archived Open Source database solutions in Azure https://sessionize.com/pdtit/ M365 Chicago - Virtual Event Archived Secure your cloud user\u0026rsquo;s identity end-to-end https://sessionize.com/pdtit/ Azure ATT Global Bootcamp - online event Archived test session https://sessionize.com/pdtit/ Sessionize Speaker Sessions (master list) 2018-2026 Detailed session abstracts and session URLs https://sessionize.com/pdtit/ ","date":"0001-01-01T00:00:00Z","permalink":"/speaking/","title":"Public Speaking"}]