Microsoft August 2025 Patch Tuesday, (Tue, Aug 12th)

This month's Microsoft patch update addresses a total of 111 vulnerabilities, with 17 classified as critical. Among these, one vulnerability was disclosed prior to the patch release, marking it as a zero-day. While none of the vulnerabilities have been exploited in the wild, the critical ones pose significant risks, including remote code execution and elevation of privilege. Users are strongly advised to apply the updates promptly to safeguard their systems against potential threats.

Google Paid Ads for Fake Tesla Websites, (Sun, Aug 10th)

In recent media events, Tesla has demoed progressively more sophisticated versions of its Optimus robots. The sales pitch is pretty simple: "Current AI" is fun, but what we really need is not something to create more funny kitten pictures. We need AI to load and empty dishwashers, fold laundry, and mow lawns. But the robot is not for sale yet, and there is no firm release date.

Do sextortion scams still work in 2025?, (Wed, Aug 6th)

Sextortion e-mails have been with us for quite a while, and these days, most security professionals tend to think of them as “e-mail background noise” rather than as a serious threat. Given that their existence is reasonably well known even among the general public, this viewpoint would seem to be justified… But are sextortion messages really irrelevant as a threat at this point, and can we therefore safely omit the topic from security awareness training?

Introducing MCP Support in AI Shell Preview 6

AI Shell Preview 6 is here!

We are super excited to announce the latest preview release of AI Shell. This release focuses on
enhancing the user experience with new features, improved error handling, and better integration
with Model Context Protocol (MCP) tools.

What’s new at a glance

  • MCP client integration
  • Built-in tools
  • Resolve-Error command improvements
  • Aliases and flows for staying in your terminal

MCP Integration

AI Shell now acts as an MCP client, which allows you to add any MCP server to your AI Shell
experience. Connecting to an MCP server massively expands what AI Shell can do, giving it tools
that provide more relevant data or carry out actions on your behalf!

AI Shell MCP Client

Adding MCP Servers

To add an MCP server, create an mcp.json file in the $HOME/.aish folder. The following example
shows two MCP servers: everything and filesystem. You can add any MCP servers you want.

{
  "servers": {
    "everything": {
      "type": "stdio",
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-everything"]
    },
    "filesystem": {
      "type": "stdio",
      "command": "npx",
      "args": [
        "-y",
        "@modelcontextprotocol/server-filesystem",
        "C:/Users/username/"
      ]
    }
  }
}
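If you prefer to script this setup, here is a minimal PowerShell sketch (not from the original post) that creates the folder and writes a trimmed-down version of the configuration above; adjust the contents for the servers you actually want:

# Create the AI Shell configuration folder and write an mcp.json
# (only the "everything" server from the example above, for brevity).
$aishDir = Join-Path $HOME '.aish'
New-Item -ItemType Directory -Path $aishDir -Force | Out-Null

@'
{
  "servers": {
    "everything": {
      "type": "stdio",
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-everything"]
    }
  }
}
'@ | Set-Content -Path (Join-Path $aishDir 'mcp.json') -Encoding utf8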

If it’s a remote MCP server, change the type to https. You know that you have successfully added
an MCP server when you see it in the AI Shell UI. You can confirm that it’s running by checking the
status of the server through the /mcp command. Using /mcp also lists each MCP Server and the
tools available.

Screenshot of MCP servers registered in AI Shell

NOTE

You must have Node.js or uv installed to use MCP servers that
use those command-line tools.

Standalone experience with AI Shell and MCP Servers

MCP servers enhance your standalone experience with AI Shell, allowing your command line to use MCP
servers and AI to perform tasks. For example, @simonb97/server-win-cli is an MCP server that
allows you to run commands on your Windows machine, whether it be PowerShell, CMD, Git Bash, or any
configured shell you use! It also provides configuration settings to define which commands and
operations are allowed to run.

CAUTION

Please note this is a community MCP server and not an
official Microsoft MCP Server. We encourage you to do your own research and testing before using
it.
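If you decide to try it after doing that research, an entry in mcp.json following the same pattern as the example above might look like this (a sketch only; the server key name is arbitrary and the arguments the server expects may differ, so check its own documentation):

"win-cli": {
  "type": "stdio",
  "command": "npx",
  "args": ["-y", "@simonb97/server-win-cli"]
}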

AI Shell with MCP Server

Built-in Tools for AI Shell

This release introduces built-in tools that are now accessible to agents within AI Shell. These
commands are similar to MCP Server tools, but are exclusive to the AI Shell experience. These tools
are designed to enhance the AI Shell experience by providing context-aware capabilities and
automation features. They can be used in conjunction with the MCP servers to create a powerful
AI-driven shell environment.

  • get_working_directory – Get the current working directory of the connected PowerShell session, including the provider name (e.g., FileSystem, Certificate) and the path (e.g., C:\, cert:\).
  • get_command_history – Get up to 5 of the most recent commands executed in the connected PowerShell session.
  • get_terminal_content – Get all output currently displayed in the terminal window of the connected PowerShell session.
  • get_environment_variables – Get environment variables and their values from the connected PowerShell session. Values of potentially sensitive variables are redacted.
  • copy_text_to_clipboard – Copy the provided text or code to the system clipboard, making it available for pasting elsewhere.
  • post_code_to_terminal – Insert code into the prompt of the connected PowerShell session without executing it. The user can review and choose to run it manually by pressing Enter.
  • run_command_in_terminal – Execute shell commands in a persistent PowerShell session, preserving environment variables, working directory, and other context across multiple commands.
  • get_command_output – Get the output of a command previously started with run_command_in_terminal.

Note

The built-in tools rely on the side-car experience with a
connected PowerShell session and provide enhanced context awareness and automation capabilities.

Here is a simple demo showing how you can have AI Shell run commands on your behalf using the
run_command_in_terminal tool:

Run command in terminal tool

This example shows how additional context is provided to AI Shell to improve results:

Getting more context with built-in tools

You can also use the get_terminal_content tool to get the content from the connected terminal and
provide it to AI Shell to help it understand what you are trying to do:

Getting content from commands run before AI Shell started to get assistance

Resolve-Error Command Improvements

Previously, the Resolve-Error command could only act on an error from the command that ran
immediately before it. Now, Resolve-Error identifies which command the user wants to troubleshoot:

  • If the last error’s command matches the most recent command in history, it’s assumed to be the one
    the user is interested in.
  • If the last error’s command isn’t the most recent and $LASTEXITCODE is null or zero, the error
    likely comes from an earlier command, not the very last one.
  • If $LASTEXITCODE is non-zero and $? is false, the last command was a failing native command.
  • If $LASTEXITCODE is non-zero but $? is true, it’s unclear which command or failure the user
    is focused on, so the agent analyzes the terminal content to determine the relevant context.

This logic lets AI Shell better identify which error you are trying to resolve, rather than
requiring you to ask for AI’s help immediately after the error occurs.
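To make the heuristic above concrete, here is a small PowerShell illustration (not from the original post) of how the two automatic variables it relies on behave:

# A failing *native* command sets $LASTEXITCODE to its exit code and $? to $false:
cmd /c exit 3
$?              # False
$LASTEXITCODE   # 3

# A failing *cmdlet* sets $? to $false but leaves $LASTEXITCODE untouched,
# which is why a null or zero exit code points away from the last native command:
Get-Item C:\does-not-exist   # writes an ItemNotFound error
$?              # False
$LASTEXITCODE   # still 3, from the earlier native command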

Staying in your shell

The Invoke-AIShell and Resolve-Error commands allow you to stay in your working terminal to
interact with the AI Shell agent. To learn more about the parameters added, see the
previous blog post that details these features. For your convenience, these commands have
aliases that make them quicker to use.

Command Name     Alias
Invoke-AIShell   askai
Resolve-Error    fixit

Fixing an error and utilizing fixit and askai commands
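As a rough illustration of staying in your shell (this is a sketch, not from the post; in particular, treat the -Query parameter name as an assumption and check the linked post for the exact parameters):

# Ask the AI Shell agent a question without leaving the working terminal.
# 'askai' is the alias for Invoke-AIShell; the -Query parameter name is assumed here.
askai -Query "Why did my last command fail?"

# After a command fails, hand the error to the agent for troubleshooting.
# 'fixit' is the alias for Resolve-Error.
Get-Service DoesNotExist
fixit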

Conclusion

We hope that these enhancements make your experience with AI Shell more powerful! We are always
looking for feedback and suggestions, so please submit issues or feature requests in our
GitHub repository.

Thank you so much!

AI Shell Team

Steven Bucher & Dongbo Wang

Introducing Amazon Elastic VMware Service for running VMware Cloud Foundation on AWS

Today, we’re announcing the general availability of Amazon Elastic VMware Service (Amazon EVS), a new AWS service that lets you run VMware Cloud Foundation (VCF) environments directly within your Amazon Virtual Private Cloud (Amazon VPC). With Amazon EVS, you can deploy fully functional VCF environments in just hours using a guided workflow, while running your VMware workloads on qualified Amazon Elastic Compute Cloud (Amazon EC2) bare metal instances and seamlessly integrating with AWS services such as Amazon FSx for NetApp ONTAP.

Many organizations running VMware workloads on premises want to move to the cloud to benefit from improved scalability, reliability, and access to cloud services, but migrating these workloads often requires substantial changes to applications and infrastructure configurations. Amazon EVS lets customers continue using their existing VMware expertise and tools without having to re-architect applications or change established practices, thereby simplifying the migration process while providing access to AWS’s scale, reliability, and broad set of services.

With Amazon EVS, you can run VMware workloads directly in your Amazon VPC. This gives you full control over your environments while being on AWS infrastructure. You can extend your on-premises networks and migrate workloads without changing IP addresses or operational runbooks, reducing complexity and risk.

Key capabilities and features

Amazon EVS delivers a comprehensive set of capabilities designed to streamline your VMware workload migration and management experience. The service enables seamless workload migration without the need for replatforming or changing hypervisors, which means you can maintain your existing infrastructure investments while moving to AWS. Through an intuitive, guided workflow on the AWS Management Console, you can efficiently provision and configure your EVS environments, significantly reducing the complexity of migrating your workloads to AWS.

With Amazon EVS, you can deploy a fully functional VCF environment running on AWS in a few hours. This process eliminates many of the manual steps and potential configuration errors that often occur during traditional deployments. Furthermore, with Amazon EVS you can optimize your virtualization stack on AWS. Because the VCF environment runs inside your VPC, you have full administrative access to the environment and the associated management appliances. You can also integrate third-party solutions, from external storage such as Amazon FSx for NetApp ONTAP or Pure Cloud Block Store to backup solutions such as Veeam Backup and Replication.

The service also gives you the ability to self-manage or work with AWS Partners to build, manage, and operate your environments. This provides you with flexibility to match your approach with your overall goals.

Setting up a new VCF environment

Organizations can streamline their setup process by ensuring they have all the necessary prerequisites in place ahead of creating a new VCF environment. These prerequisites include having an active AWS account, configuring the appropriate AWS Identity and Access Management (IAM) permissions, and setting up an Amazon VPC with sufficient CIDR space and two Route Server endpoints, with each endpoint having its own peer. Additionally, customers will need to have their VMware Cloud Foundation license keys ready, secure Amazon EC2 capacity reservations specifically for i4i.metal instances, and plan their VLAN subnet information.

To help ensure a smooth deployment process, we’ve provided a Getting started hub, which you can access from the EVS homepage as well as a comprehensive guide in our documentation. By following these preparation steps, you can avoid potential setup delays and ensure a successful environment creation.
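For instance, the capacity reservation prerequisite can be handled ahead of time with the AWS CLI. The following is a hypothetical sketch (the Availability Zone and platform value are placeholders, not guidance from this post), sized for the four i4i.metal hosts a minimum deployment needs:

# Reserve capacity for the four i4i.metal hosts required by a minimum EVS deployment.
# Availability Zone, platform, and count are illustrative placeholders.
aws ec2 create-capacity-reservation `
    --instance-type i4i.metal `
    --instance-platform "Linux/UNIX" `
    --availability-zone us-east-1a `
    --instance-count 4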

Screenshots of EVS onboarding

Let’s walk through the process of setting up a new VCF environment using Amazon EVS.

Screenshots of EVS onboarding

You will need to provide your Site ID, which is allocated by Broadcom when purchasing VCF licenses, along with your license keys. To ensure a successful initial deployment, you should verify you have sufficient licensing coverage for a minimum of 256 cores. This translates to at least four i4i.metal instances, with each instance providing 64 physical cores.

This licensing requirement helps you maintain optimal performance and ensures your environment meets the necessary infrastructure specifications. By confirming these requirements upfront, you can avoid potential deployment delays and ensure a smooth setup process.

Screenshots of EVS onboarding

Once you have provided all the required details, you will be prompted to specify your host details. These are the underlying Amazon EC2 instances that your VCF environment will be deployed on.

Screenshots of EVS onboarding

Once you have filled out details for each of your host instances, you will need to configure your networking and management appliance DNS details. For further information on how to create a new VCF environment on Amazon EVS, follow the documentation here.

Screenshots of EVS onboarding

After you have created your VCF environment, you will be able to look over all of the host and configuration details through the AWS Console.

Additional things to know

Amazon EVS currently supports VCF version 5.2.1 and runs on i4i.metal instances. Future releases will expand VCF versions, licensing options, and more instance type support to provide even more flexibility for your deployments.

Amazon EVS provides flexible storage options. Your Amazon EVS local Instance storage is powered by VMware’s vSAN solution, which pools local disks across multiple ESXi hosts into a single distributed datastore. To scale your storage, you can leverage external Network File System (NFS) or iSCSI-based storage solutions. For example, Amazon FSx for NetApp ONTAP is particularly well-suited for use as an NFS datastore or shared block storage over iSCSI.
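As a rough sketch of what provisioning such an external datastore might look like (the subnet, capacity, and throughput values below are illustrative placeholders, not recommendations from this post):

# Create an FSx for NetApp ONTAP file system to use as an external NFS datastore.
# Subnet ID, storage capacity (GiB), and throughput capacity are placeholders.
aws fsx create-file-system `
    --file-system-type ONTAP `
    --storage-capacity 1024 `
    --subnet-ids subnet-0123456789abcdef0 `
    --ontap-configuration "DeploymentType=SINGLE_AZ_1,ThroughputCapacity=256"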

Additionally, Amazon EVS makes connecting your on-premises environments to AWS simple. You can connect from your on-premises vSphere environment into Amazon EVS using a Direct Connect connection or a VPN that terminates into a transit gateway. Amazon EVS also manages the underlying connectivity from your VLAN subnets into your VMs.

AWS provides comprehensive support for all AWS services deployed by Amazon EVS, handling direct customer support while engaging with Broadcom for advanced support needs. Customers must maintain AWS Business Support on accounts running the service.

Availability and pricing

Amazon EVS is now generally available in US East (N. Virginia), US East (Ohio), US West (Oregon), Europe (Frankfurt), Europe (Ireland), and Asia Pacific (Tokyo) AWS Regions, with additional Regions coming soon. Pricing is based on the Amazon EC2 instances and AWS resources you use, with no minimum fees or upfront commitments.

To learn more, visit the Amazon EVS product page.

AWS Weekly Roundup: Amazon DocumentDB, AWS Lambda, Amazon EC2, and more (August 4, 2025)

This week brings an array of innovations spanning from generative AI capabilities to enhancements of foundational services. Whether you’re building AI-powered applications, managing databases, or optimizing your cloud infrastructure, these updates help build more advanced, robust, and flexible applications.

Last week’s launches
Here are the launches that got my attention this week:

Additional updates
Here are some additional projects, blog posts, and news items that I found interesting:

Upcoming AWS events
Check your calendars so that you can sign up for these upcoming events:

AWS re:Invent 2025 (December 1-5, 2025, Las Vegas) — AWS’s flagship annual conference offering collaborative innovation through peer-to-peer learning, expert-led discussions, and invaluable networking opportunities.

AWS Summits — Join free online and in-person events that bring the cloud computing community together to connect, collaborate, and learn about AWS. Register in your nearest city: Mexico City (August 6) and Jakarta (August 7).

AWS Community Days — Join community-led conferences that feature technical discussions, workshops, and hands-on labs led by expert AWS users and industry leaders from around the world: Australia (August 15), Adria (September 5), Baltic (September 10), and Aotearoa (September 18).

Join the AWS Builder Center to learn, build, and connect with builders in the AWS community. Browse upcoming in-person and virtual developer-focused events.

That’s all for this week. Check back next Monday for another Weekly Roundup!

Danilo

Legacy May Kill, (Sun, Aug 3rd)

Just saw something that I thought was long gone. The username "pop3user" is showing up in our telnet/ssh logs. I don't know how long ago it was that I used POP3 to retrieve e-mail from one of my mail servers. IMAP and various webmail systems have long since replaced this classic email protocol. But at least this one attacker is counting on someone still having a "pop3user" configured.

Introducing Amazon Application Recovery Controller Region switch: A multi-Region application recovery service

As a developer advocate at AWS, I’ve worked with many enterprise organizations who operate critical applications across multiple AWS Regions. A key concern they often share is the lack of confidence in their Region failover strategy—whether it will work when needed, whether all dependencies have been identified, and whether their teams have practiced the procedures enough. Traditional approaches often leave them uncertain about their readiness for a Region switch.

Today, I’m excited to announce Amazon Application Recovery Controller (ARC) Region switch, a fully managed, highly available capability that enables organizations to plan, practice, and orchestrate Region switches with confidence, eliminating the uncertainty around cross-Region recovery operations. Region switch helps you orchestrate recovery for your multi-Region applications on AWS. It gives you a centralized solution to coordinate and automate recovery tasks across AWS services and accounts when you need to switch your application’s operations from one AWS Region to another.

Many customers deploy business-critical applications across multiple AWS Regions to meet their availability requirements. When an operational event impacts an application in one Region, switching operations to another Region involves coordinating multiple steps across different AWS services, such as compute, databases, and DNS. This coordination typically requires building and maintaining complex scripts that need regular testing and updates as applications evolve. Additionally, orchestrating and tracking the progress of Region switches across multiple applications and providing evidence of successful recovery for compliance purposes often involves manual data gathering.

Region switch is built on a Regional data plane architecture, where Region switch plans are executed from the Region being activated. This design eliminates dependencies on the impacted Region during the switch, providing a more resilient recovery process since the execution is independent of the Region you’re switching from.

Building a recovery plan with ARC Region switch
With ARC Region switch, you can create recovery plans that define the specific steps needed to switch your application between Regions. Each plan contains execution blocks that represent actions on AWS resources. At launch, Region switch supports nine types of execution blocks:

  • ARC Region switch plan execution block – Lets you orchestrate the order in which multiple applications switch to the Region you want to activate by referencing other Region switch plans.
  • Amazon EC2 Auto Scaling execution block – Scales Amazon EC2 compute resources in your target Region by matching a specified percentage of your source Region’s capacity.
  • ARC routing controls execution block – Changes routing control states to redirect traffic using DNS health checks.
  • Amazon Aurora global database execution block – Performs a database failover with potential data loss, or a switchover with zero data loss, for Aurora Global Database.
  • Manual approval execution block – Adds approval checkpoints in your recovery workflow where team members can review and approve before proceeding.
  • Custom Action AWS Lambda execution block – Adds custom recovery steps by executing Lambda functions in either the activating or deactivating Region.
  • Amazon Route 53 health check execution block – Lets you specify which Regions your application’s traffic is redirected to during failover. When your Region switch plan is executed, the Amazon Route 53 health check state is updated and traffic is redirected based on your DNS configuration.
  • Amazon Elastic Kubernetes Service (Amazon EKS) resource scaling execution block – Scales Kubernetes pods in your target Region during recovery by matching a specified percentage of your source Region’s capacity.
  • Amazon Elastic Container Service (Amazon ECS) resource scaling execution block – Scales ECS tasks in your target Region by matching a specified percentage of your source Region’s capacity.

Region switch continually validates your plans by checking resource configurations and AWS Identity and Access Management (IAM) permissions every 30 minutes. During execution, Region switch monitors the progress of each step and provides detailed logs. You can view execution status through the Region switch dashboard and at the bottom of the execution details page.

To help you balance cost and reliability, Region switch offers flexibility in how you prepare your standby resources. You can configure the desired percentage of compute capacity to target in your destination Region during recovery using Region switch scaling execution blocks. For critical applications expecting surge traffic during recovery, you might choose to scale beyond 100 percent capacity, and setting a lower percentage can help achieve faster overall execution times. However, it’s important to note that using one of the scaling execution blocks does not guarantee capacity, and actual resource availability depends on the capacity in the destination Region at the time of recovery. To facilitate the best possible outcomes, we recommend regularly testing your recovery plans and maintaining appropriate Service Quotas in your standby Regions.

ARC Region switch includes a global dashboard you can use to monitor the status of Region switch plans across your enterprise and Regions. Additionally, there’s a Regional executions dashboard that only displays executions within the current console Region. This dashboard is designed to be highly available across each Region so it can be used during operational events.

Region switch allows resources to be hosted in an account that is separate from the account that contains the Region switch plan. If the plan uses resources from an account that is different from the account that hosts the plan, then Region switch uses the executionRole to assume the crossAccountRole to access those resources. Additionally, Region switch plans can be centralized and shared across multiple accounts using AWS Resource Access Manager (AWS RAM), enabling efficient management of recovery plans across your organization.
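For the cross-account case, the resource-owning account needs a role that the plan's execution role is allowed to assume. Here is a hypothetical sketch (the role names follow the terms above; the account ID, file name, and permissions are placeholders):

# Trust policy letting the plan's execution role assume the cross-account role.
@'
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": { "AWS": "arn:aws:iam::111122223333:role/RegionSwitchExecutionRole" },
    "Action": "sts:AssumeRole"
  }]
}
'@ | Set-Content -Path trust-policy.json -Encoding utf8

# Create the role in the account that owns the application resources.
aws iam create-role `
    --role-name RegionSwitchCrossAccountRole `
    --assume-role-policy-document file://trust-policy.json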

Let’s see how it works
Let me show you how to create and execute a Region switch plan. There are three parts in this demo. First, I create a Region switch plan. Then, I define a workflow. Finally, I configure the triggers.

Step 1: Create a plan

I navigate to the Application Recovery Controller section of the AWS Management Console. I choose Region switch in the left navigation menu. Then, I choose Create Region switch plan.

ARC Region switch - 1

After I give a name to my plan, I specify a Multi-Region recovery approach (active/passive or active/active). In Active/Passive mode, two application replicas are deployed into two Regions, with traffic routed into the active Region only. The replica in the passive Region can be activated by executing the Region switch plan.

Then, I select the Primary Region and Standby Region. Optionally, I can enter a Desired recovery time objective (RTO). The service will use this value to provide insight into how long Region switch plan executions take in relation to my desired RTO.

ARC Region switch - create plan

I enter the Plan execution IAM role. This is the role that allows Region switch to call AWS services during execution. I make sure the role I choose has permissions to be invoked by the service and contains the minimum set of permissions allowing ARC to operate. Refer to the IAM permissions section of the documentation for the details.

ARC Region switch - create plan 2

Step 2: Create a workflow

When the two Plan evaluation status notifications are green, I create a workflow. I choose Build workflows to get started.


ARC Region switch - status

Plans enable you to build specific workflows that will recover your applications using Region switch execution blocks. You can build workflows with execution blocks that run sequentially or in parallel to orchestrate the order in which multiple applications or resources recover into the activating Region. A plan is made up of these workflows that allow you to activate or deactivate a specific Region.

For this demo, I use the graphical editor to create the workflow. But you can also define the workflow in JSON. This format is better suited for automation, or when you want to store your workflow definition in a source code management system (SCMS) and use it with your infrastructure as code (IaC) tools, such as AWS CloudFormation.

ARC - define workflows

I can alternate between the Design and the Code views by selecting the corresponding tab next to the Workflow builder title. The JSON view is read-only. I designed the workflow with the graphical editor and I copied the JSON equivalent to store it alongside my IaC project files.

ARC - define workflows as code

Region switch launches an evaluation to validate your recovery strategy every 30 minutes. It regularly checks that all actions defined in your workflows will succeed when executed. This proactive validation assesses various elements, including IAM permissions and resource states across accounts and Regions. By continually monitoring these dependencies, Region switch helps ensure your recovery plans remain viable and identifies potential issues before they impact your actual switch operations.

However, just as an untested backup is not a reliable backup, an untested recovery plan cannot be considered truly validated. While continuous evaluation provides a strong foundation, we strongly recommend regularly executing your plans in test scenarios to verify their effectiveness, understand actual recovery times, and ensure your teams are familiar with the recovery procedures. This hands-on testing is essential for maintaining confidence in your disaster recovery strategy.

Step 3: Create a trigger

A trigger defines the conditions to activate the workflows just created. It’s expressed as a set of CloudWatch alarms. Alarm-based triggers are optional. You can also use Region switch with manual triggers.

From the Region switch page in the console, I choose the Triggers tab and choose Add triggers.

ARC - Trigger

For each Region defined in my plan, I choose Add trigger to define the triggers that will activate the Region.

ARC - Trigger 2

Finally, I choose the alarms and their state (OK or Alarm) that Region switch will use to trigger the activation of the Region.

ARC - Trigger 3
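As an illustration of the kind of alarm that could back such a trigger (a generic sketch, not from the post; the metric, dimensions, and thresholds are placeholders for whatever health signal best represents your application):

# A CloudWatch alarm that fires when the primary Region's load balancer
# reports fewer than one healthy host for five consecutive minutes.
aws cloudwatch put-metric-alarm `
    --alarm-name my-app-primary-region-unhealthy `
    --namespace AWS/ApplicationELB `
    --metric-name HealthyHostCount `
    --dimensions Name=LoadBalancer,Value=app/my-app/0123456789abcdef `
    --statistic Minimum `
    --period 60 `
    --evaluation-periods 5 `
    --threshold 1 `
    --comparison-operator LessThanThreshold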

I’m now ready to test the execution of the plan to switch Regions using Region switch. It’s important to execute the plan from the Region I’m activating (the target Region of the workflow) and use the data plane in that specific Region.

Here is how to execute a plan using the AWS Command Line Interface (AWS CLI):

aws arc-region-switch start-plan-execution \
    --plan-arn arn:aws:arc-region-switch::111122223333:plan/resource-id \
    --target-region us-west-2 \
    --action activate

Pricing and availability
Region switch is available in all commercial AWS Regions at $70 per month per plan. Each plan can include up to 100 execution blocks, or you can create parent plans to orchestrate up to 25 child plans.

Having seen firsthand the engineering effort that goes into building and maintaining multi-Region recovery solutions, I’m thrilled to see how Region switch will help automate this process for our customers. To get started with ARC Region switch, visit the ARC console and create your first Region switch plan. For more information about Region switch, visit the Amazon Application Recovery Controller (ARC) documentation. You can also reach out to your AWS account team with questions about using Region switch for your multi-Region applications.

I look forward to hearing about how you use Region switch to strengthen your multi-Region applications’ resilience.

— seb